00:00:00.000 Started by upstream project "autotest-nightly" build number 4250
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3613
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.000 Started by timer
00:00:00.113 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.113 The recommended git tool is: git
00:00:00.113 using credential 00000000-0000-0000-0000-000000000002
00:00:00.115 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.174 Fetching changes from the remote Git repository
00:00:00.176 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.250 Using shallow fetch with depth 1
00:00:00.250 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.250 > git --version # timeout=10
00:00:00.302 > git --version # 'git version 2.39.2'
00:00:00.302 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.337 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.337 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.861 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.872 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.885 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:07.885 > git config core.sparsecheckout # timeout=10
00:00:07.897 > git read-tree -mu HEAD # timeout=10
00:00:07.915 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:07.933 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:07.933 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:08.015 [Pipeline] Start of Pipeline
00:00:08.025 [Pipeline] library
00:00:08.027 Loading library shm_lib@master
00:00:08.027 Library shm_lib@master is cached. Copying from home.
00:00:08.047 [Pipeline] node
00:00:08.054 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:08.056 [Pipeline] {
00:00:08.064 [Pipeline] catchError
00:00:08.065 [Pipeline] {
00:00:08.075 [Pipeline] wrap
00:00:08.083 [Pipeline] {
00:00:08.090 [Pipeline] stage
00:00:08.093 [Pipeline] { (Prologue)
00:00:08.301 [Pipeline] sh
00:00:08.588 + logger -p user.info -t JENKINS-CI
00:00:08.605 [Pipeline] echo
00:00:08.606 Node: WFP6
00:00:08.614 [Pipeline] sh
00:00:08.917 [Pipeline] setCustomBuildProperty
00:00:08.928 [Pipeline] echo
00:00:08.930 Cleanup processes
00:00:08.933 [Pipeline] sh
00:00:09.277 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.277 3555624 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.289 [Pipeline] sh
00:00:09.573 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.573 ++ grep -v 'sudo pgrep'
00:00:09.573 ++ awk '{print $1}'
00:00:09.573 + sudo kill -9
00:00:09.573 + true
00:00:09.589 [Pipeline] cleanWs
00:00:09.598 [WS-CLEANUP] Deleting project workspace...
00:00:09.598 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.604 [WS-CLEANUP] done
00:00:09.608 [Pipeline] setCustomBuildProperty
00:00:09.622 [Pipeline] sh
00:00:09.901 + sudo git config --global --replace-all safe.directory '*'
00:00:10.014 [Pipeline] httpRequest
00:00:10.651 [Pipeline] echo
00:00:10.652 Sorcerer 10.211.164.101 is alive
00:00:10.661 [Pipeline] retry
00:00:10.663 [Pipeline] {
00:00:10.676 [Pipeline] httpRequest
00:00:10.680 HttpMethod: GET
00:00:10.680 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:10.681 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:10.701 Response Code: HTTP/1.1 200 OK
00:00:10.702 Success: Status code 200 is in the accepted range: 200,404
00:00:10.702 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:41.577 [Pipeline] }
00:00:41.592 [Pipeline] // retry
00:00:41.599 [Pipeline] sh
00:00:41.883 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:41.899 [Pipeline] httpRequest
00:00:42.312 [Pipeline] echo
00:00:42.314 Sorcerer 10.211.164.101 is alive
00:00:42.322 [Pipeline] retry
00:00:42.324 [Pipeline] {
00:00:42.337 [Pipeline] httpRequest
00:00:42.340 HttpMethod: GET
00:00:42.341 URL: http://10.211.164.101/packages/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz
00:00:42.341 Sending request to url: http://10.211.164.101/packages/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz
00:00:42.351 Response Code: HTTP/1.1 200 OK
00:00:42.351 Success: Status code 200 is in the accepted range: 200,404
00:00:42.352 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz
00:01:47.224 [Pipeline] }
00:01:47.242 [Pipeline] // retry
00:01:47.250 [Pipeline] sh
00:01:47.536 + tar --no-same-owner -xf spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz
00:01:50.838 [Pipeline] sh
00:01:51.123 + git -C spdk log --oneline -n5
00:01:51.123 d1c46ed8e lib/rdma_provider: Add API to check if accel seq supported
00:01:51.123 a59d7e018 lib/mlx5: Add API to check if UMR registration supported
00:01:51.123 f6925f5e4 accel/mlx5: Merge crypto+copy to reg UMR
00:01:51.123 008a6371b accel/mlx5: Initial implementation of mlx5 platform driver
00:01:51.123 cc533a3e5 nvme/nvme: Factor out submit_request function
00:01:51.133 [Pipeline] }
00:01:51.146 [Pipeline] // stage
00:01:51.154 [Pipeline] stage
00:01:51.156 [Pipeline] { (Prepare)
00:01:51.171 [Pipeline] writeFile
00:01:51.185 [Pipeline] sh
00:01:51.469 + logger -p user.info -t JENKINS-CI
00:01:51.482 [Pipeline] sh
00:01:51.765 + logger -p user.info -t JENKINS-CI
00:01:51.776 [Pipeline] sh
00:01:52.061 + cat autorun-spdk.conf
00:01:52.061 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:52.061 SPDK_TEST_NVMF=1
00:01:52.061 SPDK_TEST_NVME_CLI=1
00:01:52.061 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:52.061 SPDK_TEST_NVMF_NICS=e810
00:01:52.061 SPDK_RUN_ASAN=1
00:01:52.061 SPDK_RUN_UBSAN=1
00:01:52.061 NET_TYPE=phy
00:01:52.061 RUN_NIGHTLY=1
00:01:52.072 [Pipeline] readFile
00:01:52.098 [Pipeline] withEnv
00:01:52.100 [Pipeline] {
00:01:52.113 [Pipeline] sh
00:01:52.398 + set -ex
00:01:52.398 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:52.398 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:52.398 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:52.398 ++ SPDK_TEST_NVMF=1
00:01:52.398 ++ SPDK_TEST_NVME_CLI=1
00:01:52.398 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:52.398 ++ SPDK_TEST_NVMF_NICS=e810
00:01:52.398 ++ SPDK_RUN_ASAN=1
00:01:52.398 ++ SPDK_RUN_UBSAN=1
00:01:52.398 ++ NET_TYPE=phy
00:01:52.398 ++ RUN_NIGHTLY=1
00:01:52.398 + case $SPDK_TEST_NVMF_NICS in
00:01:52.398 + DRIVERS=ice
00:01:52.398 + [[ tcp == \r\d\m\a ]]
00:01:52.398 + [[ -n ice ]]
00:01:52.398 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:52.398 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:52.398 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:52.398 rmmod: ERROR: Module irdma is not currently loaded
00:01:52.398 rmmod: ERROR: Module i40iw is not currently loaded
00:01:52.398 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:52.398 + true
00:01:52.398 + for D in $DRIVERS
00:01:52.398 + sudo modprobe ice
00:01:52.398 + exit 0
00:01:52.407 [Pipeline] }
00:01:52.421 [Pipeline] // withEnv
00:01:52.426 [Pipeline] }
00:01:52.439 [Pipeline] // stage
00:01:52.447 [Pipeline] catchError
00:01:52.448 [Pipeline] {
00:01:52.460 [Pipeline] timeout
00:01:52.461 Timeout set to expire in 1 hr 0 min
00:01:52.462 [Pipeline] {
00:01:52.476 [Pipeline] stage
00:01:52.478 [Pipeline] { (Tests)
00:01:52.490 [Pipeline] sh
00:01:52.775 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:52.775 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:52.775 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:52.775 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:52.775 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:52.775 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:52.775 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:52.775 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:52.775 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:52.775 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:52.775 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:52.775 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:52.775 + source /etc/os-release
00:01:52.775 ++ NAME='Fedora Linux'
00:01:52.775 ++ VERSION='39 (Cloud Edition)'
00:01:52.775 ++ ID=fedora
00:01:52.775 ++ VERSION_ID=39
00:01:52.775 ++ VERSION_CODENAME=
00:01:52.775 ++ PLATFORM_ID=platform:f39
00:01:52.775 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:52.775 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:52.775 ++ LOGO=fedora-logo-icon
00:01:52.775 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:52.775 ++ HOME_URL=https://fedoraproject.org/
00:01:52.775 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:52.775 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:52.775 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:52.775 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:52.775 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:52.775 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:52.775 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:52.775 ++ SUPPORT_END=2024-11-12
00:01:52.775 ++ VARIANT='Cloud Edition'
00:01:52.775 ++ VARIANT_ID=cloud
00:01:52.775 + uname -a
00:01:52.775 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:52.775 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:55.312 Hugepages
00:01:55.312 node hugesize free / total
00:01:55.312 node0 1048576kB 0 / 0
00:01:55.312 node0 2048kB 0 / 0
00:01:55.312 node1 1048576kB 0 / 0
00:01:55.312 node1 2048kB 0 / 0
00:01:55.312
00:01:55.312 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:55.312 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:55.312 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:55.312 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:55.312 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:55.312 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:55.312 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:55.312 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:55.312 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:55.312 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:55.312 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:55.312 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:55.312 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:55.312 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:55.312 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:55.312 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:55.312 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:55.312 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:55.312 + rm -f /tmp/spdk-ld-path
00:01:55.312 + source autorun-spdk.conf
00:01:55.312 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.312 ++ SPDK_TEST_NVMF=1
00:01:55.312 ++ SPDK_TEST_NVME_CLI=1
00:01:55.312 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:55.312 ++ SPDK_TEST_NVMF_NICS=e810
00:01:55.312 ++ SPDK_RUN_ASAN=1
00:01:55.312 ++ SPDK_RUN_UBSAN=1
00:01:55.312 ++ NET_TYPE=phy
00:01:55.312 ++ RUN_NIGHTLY=1
00:01:55.312 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:55.312 + [[ -n '' ]]
00:01:55.312 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:55.312 + for M in /var/spdk/build-*-manifest.txt
00:01:55.312 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:55.312 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:55.312 + for M in /var/spdk/build-*-manifest.txt
00:01:55.312 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:55.312 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:55.312 + for M in /var/spdk/build-*-manifest.txt
00:01:55.312 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:55.312 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:55.312 ++ uname
00:01:55.312 + [[ Linux == \L\i\n\u\x ]]
00:01:55.312 + sudo dmesg -T
00:01:55.571 + sudo dmesg --clear
00:01:55.571 + dmesg_pid=3557059
00:01:55.571 + [[ Fedora Linux == FreeBSD ]]
00:01:55.571 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.571 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.571 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:55.571 + [[ -x /usr/src/fio-static/fio ]]
00:01:55.571 + export FIO_BIN=/usr/src/fio-static/fio
00:01:55.571 + FIO_BIN=/usr/src/fio-static/fio
00:01:55.571 + sudo dmesg -Tw
00:01:55.571 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:55.571 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:55.571 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:55.571 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:55.571 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:55.571 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:55.571 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:55.571 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:55.571 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:55.571 15:06:23 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:55.571 15:06:23 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:55.571 15:06:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.571 15:06:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:55.571 15:06:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:55.571 15:06:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:55.571 15:06:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:55.571 15:06:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1
00:01:55.571 15:06:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:55.571 15:06:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:55.571 15:06:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1
00:01:55.571 15:06:23 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:55.571 15:06:23 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:55.571 15:06:23 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:55.571 15:06:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:55.571 15:06:23 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:55.571 15:06:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:55.571 15:06:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:55.571 15:06:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:55.571 15:06:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.572 15:06:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.572 15:06:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.572 15:06:23 -- paths/export.sh@5 -- $ export PATH
00:01:55.572 15:06:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.572 15:06:23 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:55.572 15:06:23 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:55.572 15:06:23 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730901983.XXXXXX
00:01:55.572 15:06:23 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730901983.frNRpi
00:01:55.572 15:06:23 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:55.572 15:06:23 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:55.572 15:06:23 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:55.572 15:06:23 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:55.572 15:06:23 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:55.572 15:06:23 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:55.572 15:06:23 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:55.572 15:06:23 -- common/autotest_common.sh@10 -- $ set +x
00:01:55.572 15:06:23 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:55.572 15:06:23 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:55.572 15:06:23 -- pm/common@17 -- $ local monitor
00:01:55.572 15:06:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.572 15:06:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.572 15:06:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.572 15:06:23 -- pm/common@21 -- $ date +%s
00:01:55.572 15:06:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.572 15:06:23 -- pm/common@21 -- $ date +%s
00:01:55.572 15:06:23 -- pm/common@25 -- $ sleep 1
00:01:55.572 15:06:23 -- pm/common@21 -- $ date +%s
00:01:55.572 15:06:23 -- pm/common@21 -- $ date +%s
00:01:55.572 15:06:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730901983
00:01:55.572 15:06:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730901983
00:01:55.572 15:06:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730901983
00:01:55.572 15:06:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730901983
00:01:55.872 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730901983_collect-vmstat.pm.log
00:01:55.872 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730901983_collect-cpu-load.pm.log
00:01:55.872 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730901983_collect-cpu-temp.pm.log
00:01:55.872 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730901983_collect-bmc-pm.bmc.pm.log
00:01:56.814 15:06:24 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:56.814 15:06:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:56.814 15:06:24 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:56.814 15:06:24 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:56.814 15:06:24 -- spdk/autobuild.sh@16 -- $ date -u
00:01:56.814 Wed Nov 6 02:06:24 PM UTC 2024
00:01:56.814 15:06:24 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:56.814 v25.01-pre-170-gd1c46ed8e
00:01:56.814 15:06:24 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:56.814 15:06:24 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:56.814 15:06:24 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:56.814 15:06:24 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:56.814 15:06:24 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.814 ************************************
00:01:56.814 START TEST asan
00:01:56.814 ************************************
00:01:56.814 15:06:24 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:01:56.814 using asan
00:01:56.814
00:01:56.814 real 0m0.000s
00:01:56.814 user 0m0.000s
00:01:56.814 sys 0m0.000s
00:01:56.814 15:06:24 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:56.814 15:06:24 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:56.814 ************************************
00:01:56.814 END TEST asan
00:01:56.814 ************************************
00:01:56.814 15:06:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:56.814 15:06:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:56.814 15:06:24 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:56.814 15:06:24 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:56.814 15:06:24 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.814 ************************************
00:01:56.814 START TEST ubsan
00:01:56.814 ************************************
00:01:56.814 15:06:24 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:56.814 using ubsan
00:01:56.814
00:01:56.814 real 0m0.000s
00:01:56.814 user 0m0.000s
00:01:56.814 sys 0m0.000s
00:01:56.814 15:06:24 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:56.814 15:06:24 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:56.814 ************************************
00:01:56.814 END TEST ubsan
00:01:56.814 ************************************
00:01:56.814 15:06:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:56.814 15:06:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:56.814 15:06:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:56.814 15:06:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:56.814 15:06:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:56.814 15:06:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:56.814 15:06:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:56.814 15:06:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:56.814 15:06:24 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:56.814 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:56.814 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:57.382 Using 'verbs' RDMA provider
00:02:10.159 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:22.380 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:22.380 Creating mk/config.mk...done.
00:02:22.380 Creating mk/cc.flags.mk...done.
00:02:22.380 Type 'make' to build.
00:02:22.380 15:06:49 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:22.380 15:06:49 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:22.380 15:06:49 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:22.380 15:06:49 -- common/autotest_common.sh@10 -- $ set +x
00:02:22.380 ************************************
00:02:22.380 START TEST make
00:02:22.380 ************************************
00:02:22.380 15:06:49 make -- common/autotest_common.sh@1127 -- $ make -j96
00:02:22.639 make[1]: Nothing to be done for 'all'.
00:02:30.767 The Meson build system
00:02:30.767 Version: 1.5.0
00:02:30.767 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:30.767 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:30.767 Build type: native build
00:02:30.767 Program cat found: YES (/usr/bin/cat)
00:02:30.767 Project name: DPDK
00:02:30.767 Project version: 24.03.0
00:02:30.767 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:30.767 C linker for the host machine: cc ld.bfd 2.40-14
00:02:30.767 Host machine cpu family: x86_64
00:02:30.767 Host machine cpu: x86_64
00:02:30.767 Message: ## Building in Developer Mode ##
00:02:30.767 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:30.767 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:30.767 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:30.767 Program python3 found: YES (/usr/bin/python3)
00:02:30.767 Program cat found: YES (/usr/bin/cat)
00:02:30.767 Compiler for C supports arguments -march=native: YES
00:02:30.767 Checking for size of "void *" : 8
00:02:30.767 Checking for size of "void *" : 8 (cached)
00:02:30.767 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:30.767 Library m found: YES
00:02:30.767 Library numa found: YES
00:02:30.767 Has header "numaif.h" : YES
00:02:30.767 Library fdt found: NO
00:02:30.767 Library execinfo found: NO
00:02:30.767 Has header "execinfo.h" : YES
00:02:30.767 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:30.767 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:30.767 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:30.767 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:30.767 Run-time dependency openssl found: YES 3.1.1
00:02:30.767 Run-time dependency libpcap found: YES 1.10.4
00:02:30.767 Has header "pcap.h" with dependency libpcap: YES
00:02:30.767 Compiler for C supports arguments -Wcast-qual: YES
00:02:30.767 Compiler for C supports arguments -Wdeprecated: YES
00:02:30.767 Compiler for C supports arguments -Wformat: YES
00:02:30.767 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:30.767 Compiler for C supports arguments -Wformat-security: NO
00:02:30.767 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:30.767 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:30.767 Compiler for C supports arguments -Wnested-externs: YES
00:02:30.767 Compiler for C supports arguments -Wold-style-definition: YES
00:02:30.767 Compiler for C supports arguments -Wpointer-arith: YES
00:02:30.767 Compiler for C supports arguments -Wsign-compare: YES
00:02:30.767 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:30.767 Compiler for C supports arguments -Wundef: YES
00:02:30.767 Compiler for C supports arguments -Wwrite-strings: YES
00:02:30.767 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:30.767 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:30.767 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:30.767 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:30.767 Program objdump found: YES (/usr/bin/objdump)
00:02:30.767 Compiler for C supports arguments -mavx512f: YES
00:02:30.767 Checking if "AVX512 checking" compiles: YES
00:02:30.767 Fetching value of define "__SSE4_2__" : 1
00:02:30.767 Fetching value of define "__AES__" : 1
00:02:30.767 Fetching value of define "__AVX__" : 1
00:02:30.767 Fetching value of define "__AVX2__" : 1
00:02:30.767 Fetching value of define "__AVX512BW__" : 1
00:02:30.767 Fetching value of define "__AVX512CD__" : 1
00:02:30.767 Fetching value of define "__AVX512DQ__" : 1
00:02:30.767 Fetching value of define "__AVX512F__" : 1
00:02:30.767 Fetching value of define "__AVX512VL__" : 1
00:02:30.767 Fetching value of define "__PCLMUL__" : 1
00:02:30.767 Fetching value of define "__RDRND__" : 1
00:02:30.767 Fetching value of define "__RDSEED__" : 1
00:02:30.767 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:30.767 Fetching value of define "__znver1__" : (undefined)
00:02:30.767 Fetching value of define "__znver2__" : (undefined)
00:02:30.767 Fetching value of define "__znver3__" : (undefined)
00:02:30.767 Fetching value of define "__znver4__" : (undefined)
00:02:30.767 Library asan found: YES
00:02:30.767 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:30.767 Message: lib/log: Defining dependency "log"
00:02:30.767 Message: lib/kvargs: Defining dependency "kvargs"
00:02:30.767 Message: lib/telemetry: Defining dependency "telemetry"
00:02:30.767 Library rt found: YES
00:02:30.767 Checking for function "getentropy" : NO
00:02:30.767 Message: lib/eal: Defining dependency "eal"
00:02:30.767 Message: lib/ring: Defining dependency "ring"
00:02:30.767 Message: lib/rcu: Defining dependency "rcu"
00:02:30.767 Message: lib/mempool: Defining dependency "mempool"
00:02:30.767 Message: lib/mbuf: Defining dependency "mbuf"
00:02:30.767 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:30.767 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:30.767 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:30.767 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:30.767 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:30.767 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:30.767 Compiler for C supports arguments -mpclmul: YES
00:02:30.767 Compiler for C supports arguments -maes: YES
00:02:30.767 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:30.767 Compiler for C supports arguments -mavx512bw: YES
00:02:30.767 Compiler for C supports arguments -mavx512dq: YES
00:02:30.767 Compiler for C supports arguments -mavx512vl: YES
00:02:30.767 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:30.767 Compiler for C supports arguments -mavx2: YES
00:02:30.767 Compiler for C supports arguments -mavx: YES
00:02:30.767 Message: lib/net: Defining dependency "net"
00:02:30.767 Message: lib/meter: Defining dependency "meter"
00:02:30.767 Message: lib/ethdev: Defining dependency "ethdev"
00:02:30.767 Message: lib/pci: Defining dependency "pci"
00:02:30.767 Message: lib/cmdline: Defining dependency "cmdline"
00:02:30.767 Message: lib/hash: Defining dependency "hash"
00:02:30.767 Message: lib/timer: Defining dependency "timer"
00:02:30.767 Message: lib/compressdev: Defining dependency "compressdev"
00:02:30.767 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:30.767 Message: lib/dmadev: Defining dependency "dmadev"
00:02:30.767 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:30.767 Message: lib/power: Defining dependency "power"
00:02:30.767 Message: lib/reorder: Defining dependency "reorder"
00:02:30.767 Message: lib/security: Defining dependency "security"
00:02:30.767 Has header "linux/userfaultfd.h" : YES
00:02:30.767 Has header "linux/vduse.h" : YES
00:02:30.767 Message: lib/vhost: Defining dependency "vhost"
00:02:30.767 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:30.767 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:30.767 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:30.767 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:30.767 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:30.767 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:30.767 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:30.767 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:30.767 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:30.767 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:30.767 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:30.767 Configuring doxy-api-html.conf using configuration
00:02:30.767 Configuring doxy-api-man.conf using configuration
00:02:30.767 Program mandb found: YES (/usr/bin/mandb)
00:02:30.768 Program sphinx-build found: NO
00:02:30.768 Configuring rte_build_config.h using configuration
00:02:30.768 Message:
00:02:30.768 =================
00:02:30.768 Applications Enabled
00:02:30.768 =================
00:02:30.768
00:02:30.768 apps:
00:02:30.768
00:02:30.768
00:02:30.768 Message:
00:02:30.768 =================
00:02:30.768 Libraries Enabled
00:02:30.768 =================
00:02:30.768
00:02:30.768 libs:
00:02:30.768 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:30.768 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:30.768 cryptodev, dmadev, power, reorder, security, vhost,
00:02:30.768
00:02:30.768 Message:
00:02:30.768 ===============
00:02:30.768 Drivers Enabled
00:02:30.768 ===============
00:02:30.768
00:02:30.768 common:
00:02:30.768
00:02:30.768 bus:
00:02:30.768 pci, vdev,
00:02:30.768 mempool:
00:02:30.768 ring,
00:02:30.768 dma:
00:02:30.768
00:02:30.768 net:
00:02:30.768
00:02:30.768 crypto:
00:02:30.768
00:02:30.768 compress:
00:02:30.768
00:02:30.768 vdpa:
00:02:30.768
00:02:30.768
00:02:30.768 Message:
00:02:30.768 =================
00:02:30.768 Content Skipped
00:02:30.768 =================
00:02:30.768
00:02:30.768 apps:
00:02:30.768 dumpcap: explicitly disabled via build config
00:02:30.768 graph: explicitly disabled via build config
00:02:30.768 pdump: explicitly disabled via build config
00:02:30.768 proc-info: explicitly disabled via build config
00:02:30.768 test-acl: explicitly disabled via build config
00:02:30.768 test-bbdev: explicitly disabled via build config
00:02:30.768 test-cmdline: explicitly disabled via build config
00:02:30.768 test-compress-perf: explicitly disabled via build config
00:02:30.768 test-crypto-perf: explicitly disabled via build config
00:02:30.768 test-dma-perf: explicitly disabled via build config
00:02:30.768 test-eventdev: explicitly disabled via build config
00:02:30.768 test-fib: explicitly disabled via build config
00:02:30.768 test-flow-perf: explicitly disabled via build config
00:02:30.768 test-gpudev: explicitly disabled via build config
00:02:30.768 test-mldev: explicitly disabled via build config
00:02:30.768 test-pipeline: explicitly disabled via build config
00:02:30.768 test-pmd: explicitly disabled via build config
00:02:30.768 test-regex: explicitly disabled via build config
00:02:30.768 test-sad: explicitly disabled via build config
00:02:30.768 test-security-perf: explicitly disabled via build config
00:02:30.768
00:02:30.768 libs:
00:02:30.768 argparse: explicitly disabled via build config
00:02:30.768 metrics: explicitly disabled via build config
00:02:30.768 acl: explicitly disabled via build config
00:02:30.768 bbdev: explicitly disabled via build config
00:02:30.768 bitratestats: explicitly disabled via build config
00:02:30.768 bpf: explicitly disabled via build config
00:02:30.768 cfgfile: explicitly disabled via build config
00:02:30.768 distributor: explicitly disabled via build config
00:02:30.768 efd: explicitly disabled via build config
00:02:30.768 eventdev: explicitly disabled via build config
00:02:30.768 dispatcher: explicitly disabled
via build config 00:02:30.768 gpudev: explicitly disabled via build config 00:02:30.768 gro: explicitly disabled via build config 00:02:30.768 gso: explicitly disabled via build config 00:02:30.768 ip_frag: explicitly disabled via build config 00:02:30.768 jobstats: explicitly disabled via build config 00:02:30.768 latencystats: explicitly disabled via build config 00:02:30.768 lpm: explicitly disabled via build config 00:02:30.768 member: explicitly disabled via build config 00:02:30.768 pcapng: explicitly disabled via build config 00:02:30.768 rawdev: explicitly disabled via build config 00:02:30.768 regexdev: explicitly disabled via build config 00:02:30.768 mldev: explicitly disabled via build config 00:02:30.768 rib: explicitly disabled via build config 00:02:30.768 sched: explicitly disabled via build config 00:02:30.768 stack: explicitly disabled via build config 00:02:30.768 ipsec: explicitly disabled via build config 00:02:30.768 pdcp: explicitly disabled via build config 00:02:30.768 fib: explicitly disabled via build config 00:02:30.768 port: explicitly disabled via build config 00:02:30.768 pdump: explicitly disabled via build config 00:02:30.768 table: explicitly disabled via build config 00:02:30.768 pipeline: explicitly disabled via build config 00:02:30.768 graph: explicitly disabled via build config 00:02:30.768 node: explicitly disabled via build config 00:02:30.768 00:02:30.768 drivers: 00:02:30.768 common/cpt: not in enabled drivers build config 00:02:30.768 common/dpaax: not in enabled drivers build config 00:02:30.768 common/iavf: not in enabled drivers build config 00:02:30.768 common/idpf: not in enabled drivers build config 00:02:30.768 common/ionic: not in enabled drivers build config 00:02:30.768 common/mvep: not in enabled drivers build config 00:02:30.768 common/octeontx: not in enabled drivers build config 00:02:30.768 bus/auxiliary: not in enabled drivers build config 00:02:30.768 bus/cdx: not in enabled drivers build config 
00:02:30.768 bus/dpaa: not in enabled drivers build config 00:02:30.768 bus/fslmc: not in enabled drivers build config 00:02:30.768 bus/ifpga: not in enabled drivers build config 00:02:30.768 bus/platform: not in enabled drivers build config 00:02:30.768 bus/uacce: not in enabled drivers build config 00:02:30.768 bus/vmbus: not in enabled drivers build config 00:02:30.768 common/cnxk: not in enabled drivers build config 00:02:30.768 common/mlx5: not in enabled drivers build config 00:02:30.768 common/nfp: not in enabled drivers build config 00:02:30.768 common/nitrox: not in enabled drivers build config 00:02:30.768 common/qat: not in enabled drivers build config 00:02:30.768 common/sfc_efx: not in enabled drivers build config 00:02:30.768 mempool/bucket: not in enabled drivers build config 00:02:30.768 mempool/cnxk: not in enabled drivers build config 00:02:30.768 mempool/dpaa: not in enabled drivers build config 00:02:30.768 mempool/dpaa2: not in enabled drivers build config 00:02:30.768 mempool/octeontx: not in enabled drivers build config 00:02:30.768 mempool/stack: not in enabled drivers build config 00:02:30.768 dma/cnxk: not in enabled drivers build config 00:02:30.768 dma/dpaa: not in enabled drivers build config 00:02:30.768 dma/dpaa2: not in enabled drivers build config 00:02:30.768 dma/hisilicon: not in enabled drivers build config 00:02:30.768 dma/idxd: not in enabled drivers build config 00:02:30.768 dma/ioat: not in enabled drivers build config 00:02:30.768 dma/skeleton: not in enabled drivers build config 00:02:30.768 net/af_packet: not in enabled drivers build config 00:02:30.768 net/af_xdp: not in enabled drivers build config 00:02:30.768 net/ark: not in enabled drivers build config 00:02:30.768 net/atlantic: not in enabled drivers build config 00:02:30.768 net/avp: not in enabled drivers build config 00:02:30.768 net/axgbe: not in enabled drivers build config 00:02:30.768 net/bnx2x: not in enabled drivers build config 00:02:30.768 net/bnxt: not in 
enabled drivers build config 00:02:30.768 net/bonding: not in enabled drivers build config 00:02:30.768 net/cnxk: not in enabled drivers build config 00:02:30.768 net/cpfl: not in enabled drivers build config 00:02:30.768 net/cxgbe: not in enabled drivers build config 00:02:30.768 net/dpaa: not in enabled drivers build config 00:02:30.768 net/dpaa2: not in enabled drivers build config 00:02:30.768 net/e1000: not in enabled drivers build config 00:02:30.768 net/ena: not in enabled drivers build config 00:02:30.768 net/enetc: not in enabled drivers build config 00:02:30.768 net/enetfec: not in enabled drivers build config 00:02:30.768 net/enic: not in enabled drivers build config 00:02:30.768 net/failsafe: not in enabled drivers build config 00:02:30.768 net/fm10k: not in enabled drivers build config 00:02:30.768 net/gve: not in enabled drivers build config 00:02:30.768 net/hinic: not in enabled drivers build config 00:02:30.768 net/hns3: not in enabled drivers build config 00:02:30.768 net/i40e: not in enabled drivers build config 00:02:30.768 net/iavf: not in enabled drivers build config 00:02:30.768 net/ice: not in enabled drivers build config 00:02:30.768 net/idpf: not in enabled drivers build config 00:02:30.768 net/igc: not in enabled drivers build config 00:02:30.768 net/ionic: not in enabled drivers build config 00:02:30.768 net/ipn3ke: not in enabled drivers build config 00:02:30.768 net/ixgbe: not in enabled drivers build config 00:02:30.768 net/mana: not in enabled drivers build config 00:02:30.768 net/memif: not in enabled drivers build config 00:02:30.768 net/mlx4: not in enabled drivers build config 00:02:30.768 net/mlx5: not in enabled drivers build config 00:02:30.768 net/mvneta: not in enabled drivers build config 00:02:30.768 net/mvpp2: not in enabled drivers build config 00:02:30.769 net/netvsc: not in enabled drivers build config 00:02:30.769 net/nfb: not in enabled drivers build config 00:02:30.769 net/nfp: not in enabled drivers build config 
00:02:30.769 net/ngbe: not in enabled drivers build config 00:02:30.769 net/null: not in enabled drivers build config 00:02:30.769 net/octeontx: not in enabled drivers build config 00:02:30.769 net/octeon_ep: not in enabled drivers build config 00:02:30.769 net/pcap: not in enabled drivers build config 00:02:30.769 net/pfe: not in enabled drivers build config 00:02:30.769 net/qede: not in enabled drivers build config 00:02:30.769 net/ring: not in enabled drivers build config 00:02:30.769 net/sfc: not in enabled drivers build config 00:02:30.769 net/softnic: not in enabled drivers build config 00:02:30.769 net/tap: not in enabled drivers build config 00:02:30.769 net/thunderx: not in enabled drivers build config 00:02:30.769 net/txgbe: not in enabled drivers build config 00:02:30.769 net/vdev_netvsc: not in enabled drivers build config 00:02:30.769 net/vhost: not in enabled drivers build config 00:02:30.769 net/virtio: not in enabled drivers build config 00:02:30.769 net/vmxnet3: not in enabled drivers build config 00:02:30.769 raw/*: missing internal dependency, "rawdev" 00:02:30.769 crypto/armv8: not in enabled drivers build config 00:02:30.769 crypto/bcmfs: not in enabled drivers build config 00:02:30.769 crypto/caam_jr: not in enabled drivers build config 00:02:30.769 crypto/ccp: not in enabled drivers build config 00:02:30.769 crypto/cnxk: not in enabled drivers build config 00:02:30.769 crypto/dpaa_sec: not in enabled drivers build config 00:02:30.769 crypto/dpaa2_sec: not in enabled drivers build config 00:02:30.769 crypto/ipsec_mb: not in enabled drivers build config 00:02:30.769 crypto/mlx5: not in enabled drivers build config 00:02:30.769 crypto/mvsam: not in enabled drivers build config 00:02:30.769 crypto/nitrox: not in enabled drivers build config 00:02:30.769 crypto/null: not in enabled drivers build config 00:02:30.769 crypto/octeontx: not in enabled drivers build config 00:02:30.769 crypto/openssl: not in enabled drivers build config 00:02:30.769 
crypto/scheduler: not in enabled drivers build config 00:02:30.769 crypto/uadk: not in enabled drivers build config 00:02:30.769 crypto/virtio: not in enabled drivers build config 00:02:30.769 compress/isal: not in enabled drivers build config 00:02:30.769 compress/mlx5: not in enabled drivers build config 00:02:30.769 compress/nitrox: not in enabled drivers build config 00:02:30.769 compress/octeontx: not in enabled drivers build config 00:02:30.769 compress/zlib: not in enabled drivers build config 00:02:30.769 regex/*: missing internal dependency, "regexdev" 00:02:30.769 ml/*: missing internal dependency, "mldev" 00:02:30.769 vdpa/ifc: not in enabled drivers build config 00:02:30.769 vdpa/mlx5: not in enabled drivers build config 00:02:30.769 vdpa/nfp: not in enabled drivers build config 00:02:30.769 vdpa/sfc: not in enabled drivers build config 00:02:30.769 event/*: missing internal dependency, "eventdev" 00:02:30.769 baseband/*: missing internal dependency, "bbdev" 00:02:30.769 gpu/*: missing internal dependency, "gpudev" 00:02:30.769 00:02:30.769 00:02:31.028 Build targets in project: 85 00:02:31.028 00:02:31.028 DPDK 24.03.0 00:02:31.029 00:02:31.029 User defined options 00:02:31.029 buildtype : debug 00:02:31.029 default_library : shared 00:02:31.029 libdir : lib 00:02:31.029 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:31.029 b_sanitize : address 00:02:31.029 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:31.029 c_link_args : 00:02:31.029 cpu_instruction_set: native 00:02:31.029 disable_apps : pdump,dumpcap,test-cmdline,test-pmd,test-crypto-perf,test-gpudev,proc-info,graph,test-flow-perf,test-compress-perf,test-fib,test-regex,test-eventdev,test-security-perf,test,test-dma-perf,test-acl,test-pipeline,test-bbdev,test-sad,test-mldev 00:02:31.029 disable_libs : 
pdump,gpudev,rawdev,pcapng,node,metrics,bitratestats,member,pdcp,eventdev,lpm,table,distributor,regexdev,bpf,acl,stack,ipsec,graph,pipeline,gso,latencystats,jobstats,port,cfgfile,dispatcher,sched,bbdev,gro,rib,argparse,fib,efd,mldev,ip_frag 00:02:31.029 enable_docs : false 00:02:31.029 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:31.029 enable_kmods : false 00:02:31.029 max_lcores : 128 00:02:31.029 tests : false 00:02:31.029 00:02:31.029 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:31.608 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:31.608 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:31.608 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:31.608 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:31.608 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:31.608 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:31.608 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:31.608 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:31.608 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:31.608 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:31.608 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:31.868 [11/268] Linking static target lib/librte_kvargs.a 00:02:31.868 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:31.868 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:31.868 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:31.868 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:31.868 [16/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:31.868 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:31.868 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:31.868 [19/268] Linking static target lib/librte_log.a 00:02:31.868 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:31.868 [21/268] Linking static target lib/librte_pci.a 00:02:31.868 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:31.868 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:32.133 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:32.133 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:32.133 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:32.133 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:32.133 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:32.133 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:32.133 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:32.133 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:32.133 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:32.133 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:32.133 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:32.133 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:32.133 [36/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:32.133 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:32.133 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:32.133 [39/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:32.133 [40/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:32.133 [41/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:32.133 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:32.133 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:32.133 [44/268] Linking static target lib/librte_meter.a 00:02:32.133 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:32.133 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:32.133 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:32.133 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:32.133 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:32.133 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:32.133 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:32.133 [52/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:32.133 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:32.133 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:32.133 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:32.133 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:32.133 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:32.391 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:32.391 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:32.391 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:32.391 [61/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:32.391 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:32.391 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:32.391 [64/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:32.391 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:32.391 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:32.391 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:32.391 [68/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:32.391 [69/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:32.391 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:32.391 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:32.391 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:32.391 [73/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:32.391 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:32.391 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:32.391 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:32.391 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:32.391 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:32.391 [79/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:32.391 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:32.391 [81/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:32.391 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:32.391 [83/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:32.391 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:32.391 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:32.391 [86/268] Linking static target lib/librte_ring.a 00:02:32.391 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:32.391 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:32.391 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:32.391 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:32.392 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:32.392 [92/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:32.392 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:32.392 [94/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:32.392 [95/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:32.392 [96/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.392 [97/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.392 [98/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:32.392 [99/268] Linking static target lib/librte_telemetry.a 00:02:32.392 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:32.392 [101/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:32.392 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:32.392 [103/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:32.392 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:32.392 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:32.392 [106/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:32.392 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:32.392 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:32.392 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:32.392 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:32.392 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:32.392 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:32.392 [113/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:32.392 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:32.650 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:32.650 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:32.650 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:32.650 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:32.650 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:32.650 [120/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:32.650 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:32.650 [122/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.650 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:32.650 [124/268] Linking static target lib/librte_cmdline.a 00:02:32.650 [125/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:32.650 [126/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:32.650 [127/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:32.650 [128/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:32.650 [129/268] Linking static target lib/librte_mempool.a 00:02:32.650 [130/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:32.650 [131/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:32.650 [132/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.650 [133/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:32.650 [134/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:32.650 [135/268] Linking static target lib/librte_net.a 00:02:32.650 [136/268] Linking target lib/librte_log.so.24.1 00:02:32.650 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.650 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:32.650 [139/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:32.650 [140/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:32.650 [141/268] Linking static target lib/librte_rcu.a 00:02:32.650 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:32.650 [143/268] Linking static target lib/librte_timer.a 00:02:32.650 [144/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:32.650 [145/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:32.650 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:32.650 [147/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:32.909 [148/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:32.909 [149/268] Linking static target lib/librte_eal.a 00:02:32.909 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:32.909 [151/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:32.909 [152/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:32.909 [153/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:32.909 [154/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:32.909 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:32.909 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:32.909 [157/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:32.909 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:32.909 [159/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.909 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:32.909 [161/268] Linking static target lib/librte_dmadev.a 00:02:32.909 [162/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:32.909 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:32.909 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:32.909 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:32.909 [166/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:32.909 [167/268] Linking target lib/librte_kvargs.so.24.1 00:02:32.909 [168/268] Linking static target lib/librte_compressdev.a 00:02:32.909 [169/268] Linking target lib/librte_telemetry.so.24.1 00:02:32.909 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:32.909 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:32.909 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:32.909 [173/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:32.909 [174/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:32.909 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:32.909 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:32.909 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:32.909 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:32.909 [179/268] Linking static target lib/librte_power.a 00:02:32.909 [180/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.909 [181/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:32.909 [182/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:32.909 [183/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:32.909 [184/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:33.168 [185/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:33.168 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:33.168 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:33.168 [188/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.168 [189/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:33.168 [190/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:33.168 [191/268] Linking static target drivers/librte_bus_vdev.a 00:02:33.168 [192/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:33.168 [193/268] Linking static target lib/librte_mbuf.a 00:02:33.168 [194/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:33.168 [195/268] Linking static target lib/librte_reorder.a 00:02:33.168 [196/268] Generating 
lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.168 [197/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:33.169 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:33.169 [199/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.169 [200/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.169 [201/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:33.169 [202/268] Linking static target drivers/librte_mempool_ring.a 00:02:33.169 [203/268] Linking static target lib/librte_security.a 00:02:33.169 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:33.427 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:33.427 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:33.427 [207/268] Linking static target drivers/librte_bus_pci.a 00:02:33.427 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:33.427 [209/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.427 [210/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.427 [211/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.427 [212/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:33.427 [213/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.427 [214/268] Linking static target lib/librte_hash.a 00:02:33.686 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.686 [216/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:33.686 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:33.686 [218/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:33.944 [219/268] Linking static target lib/librte_cryptodev.a 00:02:33.944 [220/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.944 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.944 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.944 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.513 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:34.514 [225/268] Linking static target lib/librte_ethdev.a 00:02:34.514 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.450 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:35.709 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.050 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:39.050 [230/268] Linking static target lib/librte_vhost.a 00:02:40.425 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.328 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.894 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.894 [234/268] Linking target lib/librte_eal.so.24.1 00:02:43.153 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:43.153 [236/268] Linking target lib/librte_meter.so.24.1 00:02:43.153 [237/268] Linking target lib/librte_ring.so.24.1 00:02:43.153 [238/268] Linking target 
lib/librte_timer.so.24.1 00:02:43.153 [239/268] Linking target lib/librte_pci.so.24.1 00:02:43.153 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:43.153 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:43.153 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:43.153 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:43.153 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:43.153 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:43.153 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:43.412 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:43.412 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:43.412 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:43.412 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:43.412 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:43.412 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:43.412 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:43.670 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:43.670 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:43.670 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:43.670 [257/268] Linking target lib/librte_net.so.24.1 00:02:43.670 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:43.929 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:43.929 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:43.929 [261/268] Linking target lib/librte_hash.so.24.1 00:02:43.929 [262/268] Linking target lib/librte_security.so.24.1 00:02:43.929 [263/268] Linking target 
lib/librte_cmdline.so.24.1 00:02:43.929 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:43.929 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:43.929 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:44.188 [267/268] Linking target lib/librte_power.so.24.1 00:02:44.188 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:44.188 INFO: autodetecting backend as ninja 00:02:44.188 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:54.159 CC lib/ut/ut.o 00:02:54.159 CC lib/log/log.o 00:02:54.159 CC lib/ut_mock/mock.o 00:02:54.159 CC lib/log/log_flags.o 00:02:54.159 CC lib/log/log_deprecated.o 00:02:54.159 LIB libspdk_ut.a 00:02:54.159 LIB libspdk_ut_mock.a 00:02:54.159 LIB libspdk_log.a 00:02:54.159 SO libspdk_ut.so.2.0 00:02:54.159 SO libspdk_ut_mock.so.6.0 00:02:54.159 SO libspdk_log.so.7.1 00:02:54.159 SYMLINK libspdk_ut_mock.so 00:02:54.159 SYMLINK libspdk_ut.so 00:02:54.159 SYMLINK libspdk_log.so 00:02:54.417 CC lib/dma/dma.o 00:02:54.417 CC lib/util/base64.o 00:02:54.417 CC lib/util/bit_array.o 00:02:54.417 CC lib/util/cpuset.o 00:02:54.417 CC lib/util/crc16.o 00:02:54.417 CC lib/util/crc32.o 00:02:54.417 CC lib/util/crc32c.o 00:02:54.417 CC lib/util/crc32_ieee.o 00:02:54.417 CXX lib/trace_parser/trace.o 00:02:54.417 CC lib/util/crc64.o 00:02:54.417 CC lib/util/dif.o 00:02:54.417 CC lib/ioat/ioat.o 00:02:54.417 CC lib/util/fd.o 00:02:54.417 CC lib/util/fd_group.o 00:02:54.417 CC lib/util/file.o 00:02:54.417 CC lib/util/hexlify.o 00:02:54.417 CC lib/util/iov.o 00:02:54.417 CC lib/util/math.o 00:02:54.417 CC lib/util/net.o 00:02:54.417 CC lib/util/pipe.o 00:02:54.417 CC lib/util/strerror_tls.o 00:02:54.417 CC lib/util/string.o 00:02:54.417 CC lib/util/uuid.o 00:02:54.417 CC lib/util/xor.o 00:02:54.417 CC lib/util/zipf.o 00:02:54.417 CC lib/util/md5.o 00:02:54.677 CC 
lib/vfio_user/host/vfio_user_pci.o 00:02:54.677 CC lib/vfio_user/host/vfio_user.o 00:02:54.677 LIB libspdk_dma.a 00:02:54.677 SO libspdk_dma.so.5.0 00:02:54.677 LIB libspdk_ioat.a 00:02:54.677 SYMLINK libspdk_dma.so 00:02:54.677 SO libspdk_ioat.so.7.0 00:02:54.936 SYMLINK libspdk_ioat.so 00:02:54.936 LIB libspdk_vfio_user.a 00:02:54.936 SO libspdk_vfio_user.so.5.0 00:02:54.936 SYMLINK libspdk_vfio_user.so 00:02:54.936 LIB libspdk_util.a 00:02:54.936 SO libspdk_util.so.10.1 00:02:55.196 SYMLINK libspdk_util.so 00:02:55.196 LIB libspdk_trace_parser.a 00:02:55.196 SO libspdk_trace_parser.so.6.0 00:02:55.455 SYMLINK libspdk_trace_parser.so 00:02:55.455 CC lib/conf/conf.o 00:02:55.455 CC lib/vmd/vmd.o 00:02:55.455 CC lib/vmd/led.o 00:02:55.455 CC lib/json/json_parse.o 00:02:55.455 CC lib/json/json_util.o 00:02:55.455 CC lib/idxd/idxd.o 00:02:55.455 CC lib/env_dpdk/env.o 00:02:55.455 CC lib/json/json_write.o 00:02:55.455 CC lib/env_dpdk/memory.o 00:02:55.455 CC lib/idxd/idxd_user.o 00:02:55.455 CC lib/idxd/idxd_kernel.o 00:02:55.455 CC lib/env_dpdk/pci.o 00:02:55.455 CC lib/rdma_utils/rdma_utils.o 00:02:55.455 CC lib/env_dpdk/init.o 00:02:55.455 CC lib/env_dpdk/threads.o 00:02:55.455 CC lib/env_dpdk/pci_ioat.o 00:02:55.455 CC lib/env_dpdk/pci_virtio.o 00:02:55.455 CC lib/env_dpdk/pci_vmd.o 00:02:55.455 CC lib/env_dpdk/pci_idxd.o 00:02:55.455 CC lib/env_dpdk/sigbus_handler.o 00:02:55.455 CC lib/env_dpdk/pci_event.o 00:02:55.455 CC lib/env_dpdk/pci_dpdk.o 00:02:55.455 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:55.455 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:55.714 LIB libspdk_conf.a 00:02:55.714 SO libspdk_conf.so.6.0 00:02:55.714 LIB libspdk_json.a 00:02:55.714 LIB libspdk_rdma_utils.a 00:02:55.714 SYMLINK libspdk_conf.so 00:02:55.714 SO libspdk_json.so.6.0 00:02:55.714 SO libspdk_rdma_utils.so.1.0 00:02:55.973 SYMLINK libspdk_json.so 00:02:55.973 SYMLINK libspdk_rdma_utils.so 00:02:56.231 LIB libspdk_idxd.a 00:02:56.231 LIB libspdk_vmd.a 00:02:56.231 SO libspdk_idxd.so.12.1 
00:02:56.231 SO libspdk_vmd.so.6.0 00:02:56.231 CC lib/jsonrpc/jsonrpc_client.o 00:02:56.231 CC lib/jsonrpc/jsonrpc_server.o 00:02:56.231 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:56.231 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:56.231 CC lib/rdma_provider/common.o 00:02:56.231 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:56.231 SYMLINK libspdk_idxd.so 00:02:56.231 SYMLINK libspdk_vmd.so 00:02:56.490 LIB libspdk_rdma_provider.a 00:02:56.490 SO libspdk_rdma_provider.so.7.0 00:02:56.490 LIB libspdk_jsonrpc.a 00:02:56.490 SO libspdk_jsonrpc.so.6.0 00:02:56.490 SYMLINK libspdk_rdma_provider.so 00:02:56.490 SYMLINK libspdk_jsonrpc.so 00:02:56.749 CC lib/rpc/rpc.o 00:02:57.009 LIB libspdk_env_dpdk.a 00:02:57.009 SO libspdk_env_dpdk.so.15.1 00:02:57.009 LIB libspdk_rpc.a 00:02:57.009 SYMLINK libspdk_env_dpdk.so 00:02:57.009 SO libspdk_rpc.so.6.0 00:02:57.272 SYMLINK libspdk_rpc.so 00:02:57.531 CC lib/keyring/keyring.o 00:02:57.531 CC lib/trace/trace.o 00:02:57.531 CC lib/keyring/keyring_rpc.o 00:02:57.531 CC lib/notify/notify.o 00:02:57.531 CC lib/trace/trace_flags.o 00:02:57.531 CC lib/trace/trace_rpc.o 00:02:57.531 CC lib/notify/notify_rpc.o 00:02:57.531 LIB libspdk_notify.a 00:02:57.531 SO libspdk_notify.so.6.0 00:02:57.790 LIB libspdk_keyring.a 00:02:57.790 LIB libspdk_trace.a 00:02:57.790 SYMLINK libspdk_notify.so 00:02:57.790 SO libspdk_keyring.so.2.0 00:02:57.790 SO libspdk_trace.so.11.0 00:02:57.790 SYMLINK libspdk_keyring.so 00:02:57.790 SYMLINK libspdk_trace.so 00:02:58.115 CC lib/sock/sock.o 00:02:58.115 CC lib/sock/sock_rpc.o 00:02:58.115 CC lib/thread/thread.o 00:02:58.115 CC lib/thread/iobuf.o 00:02:58.376 LIB libspdk_sock.a 00:02:58.635 SO libspdk_sock.so.10.0 00:02:58.635 SYMLINK libspdk_sock.so 00:02:58.894 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:58.894 CC lib/nvme/nvme_ctrlr.o 00:02:58.894 CC lib/nvme/nvme_fabric.o 00:02:58.894 CC lib/nvme/nvme_ns_cmd.o 00:02:58.894 CC lib/nvme/nvme_ns.o 00:02:58.894 CC lib/nvme/nvme_pcie_common.o 00:02:58.894 CC 
lib/nvme/nvme_pcie.o 00:02:58.894 CC lib/nvme/nvme_qpair.o 00:02:58.894 CC lib/nvme/nvme.o 00:02:58.894 CC lib/nvme/nvme_quirks.o 00:02:58.894 CC lib/nvme/nvme_transport.o 00:02:58.894 CC lib/nvme/nvme_discovery.o 00:02:58.894 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:58.895 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:58.895 CC lib/nvme/nvme_tcp.o 00:02:58.895 CC lib/nvme/nvme_opal.o 00:02:58.895 CC lib/nvme/nvme_io_msg.o 00:02:58.895 CC lib/nvme/nvme_poll_group.o 00:02:58.895 CC lib/nvme/nvme_zns.o 00:02:58.895 CC lib/nvme/nvme_stubs.o 00:02:58.895 CC lib/nvme/nvme_auth.o 00:02:58.895 CC lib/nvme/nvme_cuse.o 00:02:58.895 CC lib/nvme/nvme_rdma.o 00:02:59.464 LIB libspdk_thread.a 00:02:59.464 SO libspdk_thread.so.11.0 00:02:59.723 SYMLINK libspdk_thread.so 00:02:59.982 CC lib/blob/blobstore.o 00:02:59.982 CC lib/blob/request.o 00:02:59.982 CC lib/blob/zeroes.o 00:02:59.982 CC lib/fsdev/fsdev.o 00:02:59.982 CC lib/blob/blob_bs_dev.o 00:02:59.982 CC lib/fsdev/fsdev_io.o 00:02:59.982 CC lib/init/json_config.o 00:02:59.982 CC lib/fsdev/fsdev_rpc.o 00:02:59.982 CC lib/init/subsystem.o 00:02:59.982 CC lib/init/subsystem_rpc.o 00:02:59.982 CC lib/init/rpc.o 00:02:59.982 CC lib/virtio/virtio.o 00:02:59.982 CC lib/virtio/virtio_vhost_user.o 00:02:59.982 CC lib/virtio/virtio_vfio_user.o 00:02:59.982 CC lib/accel/accel.o 00:02:59.982 CC lib/virtio/virtio_pci.o 00:02:59.982 CC lib/accel/accel_rpc.o 00:02:59.982 CC lib/accel/accel_sw.o 00:03:00.241 LIB libspdk_init.a 00:03:00.241 SO libspdk_init.so.6.0 00:03:00.241 LIB libspdk_virtio.a 00:03:00.241 SYMLINK libspdk_init.so 00:03:00.241 SO libspdk_virtio.so.7.0 00:03:00.500 SYMLINK libspdk_virtio.so 00:03:00.500 LIB libspdk_fsdev.a 00:03:00.758 SO libspdk_fsdev.so.2.0 00:03:00.758 CC lib/event/app.o 00:03:00.758 CC lib/event/reactor.o 00:03:00.758 CC lib/event/log_rpc.o 00:03:00.758 CC lib/event/app_rpc.o 00:03:00.759 CC lib/event/scheduler_static.o 00:03:00.759 SYMLINK libspdk_fsdev.so 00:03:01.017 LIB libspdk_nvme.a 00:03:01.017 CC 
lib/fuse_dispatcher/fuse_dispatcher.o 00:03:01.017 LIB libspdk_accel.a 00:03:01.017 SO libspdk_nvme.so.15.0 00:03:01.017 SO libspdk_accel.so.16.0 00:03:01.017 LIB libspdk_event.a 00:03:01.017 SYMLINK libspdk_accel.so 00:03:01.275 SO libspdk_event.so.14.0 00:03:01.275 SYMLINK libspdk_event.so 00:03:01.275 SYMLINK libspdk_nvme.so 00:03:01.533 CC lib/bdev/bdev.o 00:03:01.533 CC lib/bdev/bdev_rpc.o 00:03:01.533 CC lib/bdev/bdev_zone.o 00:03:01.533 CC lib/bdev/part.o 00:03:01.533 CC lib/bdev/scsi_nvme.o 00:03:01.533 LIB libspdk_fuse_dispatcher.a 00:03:01.533 SO libspdk_fuse_dispatcher.so.1.0 00:03:01.790 SYMLINK libspdk_fuse_dispatcher.so 00:03:03.167 LIB libspdk_blob.a 00:03:03.167 SO libspdk_blob.so.11.0 00:03:03.167 SYMLINK libspdk_blob.so 00:03:03.427 CC lib/blobfs/blobfs.o 00:03:03.427 CC lib/blobfs/tree.o 00:03:03.427 CC lib/lvol/lvol.o 00:03:03.993 LIB libspdk_bdev.a 00:03:03.993 SO libspdk_bdev.so.17.0 00:03:03.993 SYMLINK libspdk_bdev.so 00:03:04.252 LIB libspdk_blobfs.a 00:03:04.252 SO libspdk_blobfs.so.10.0 00:03:04.252 CC lib/scsi/dev.o 00:03:04.252 CC lib/scsi/lun.o 00:03:04.252 CC lib/scsi/scsi_bdev.o 00:03:04.252 CC lib/scsi/port.o 00:03:04.252 CC lib/scsi/scsi.o 00:03:04.252 CC lib/scsi/scsi_pr.o 00:03:04.252 CC lib/scsi/scsi_rpc.o 00:03:04.252 CC lib/scsi/task.o 00:03:04.252 CC lib/ublk/ublk.o 00:03:04.252 CC lib/nbd/nbd.o 00:03:04.252 CC lib/ublk/ublk_rpc.o 00:03:04.252 CC lib/nvmf/ctrlr.o 00:03:04.252 CC lib/nbd/nbd_rpc.o 00:03:04.252 CC lib/nvmf/ctrlr_discovery.o 00:03:04.252 CC lib/nvmf/ctrlr_bdev.o 00:03:04.252 CC lib/ftl/ftl_core.o 00:03:04.252 CC lib/nvmf/subsystem.o 00:03:04.252 CC lib/nvmf/nvmf.o 00:03:04.252 CC lib/ftl/ftl_init.o 00:03:04.252 CC lib/nvmf/nvmf_rpc.o 00:03:04.252 CC lib/nvmf/transport.o 00:03:04.252 CC lib/ftl/ftl_layout.o 00:03:04.252 CC lib/ftl/ftl_debug.o 00:03:04.252 CC lib/nvmf/tcp.o 00:03:04.252 CC lib/ftl/ftl_io.o 00:03:04.252 CC lib/nvmf/mdns_server.o 00:03:04.252 CC lib/nvmf/stubs.o 00:03:04.252 CC lib/ftl/ftl_sb.o 
00:03:04.252 CC lib/ftl/ftl_l2p.o 00:03:04.252 CC lib/nvmf/rdma.o 00:03:04.252 CC lib/nvmf/auth.o 00:03:04.252 CC lib/ftl/ftl_l2p_flat.o 00:03:04.252 CC lib/ftl/ftl_nv_cache.o 00:03:04.252 CC lib/ftl/ftl_band.o 00:03:04.252 CC lib/ftl/ftl_band_ops.o 00:03:04.252 CC lib/ftl/ftl_writer.o 00:03:04.252 CC lib/ftl/ftl_rq.o 00:03:04.252 CC lib/ftl/ftl_reloc.o 00:03:04.253 CC lib/ftl/ftl_l2p_cache.o 00:03:04.253 CC lib/ftl/ftl_p2l.o 00:03:04.253 CC lib/ftl/ftl_p2l_log.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:04.253 CC lib/ftl/utils/ftl_conf.o 00:03:04.253 CC lib/ftl/utils/ftl_md.o 00:03:04.253 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:04.253 CC lib/ftl/utils/ftl_mempool.o 00:03:04.253 CC lib/ftl/utils/ftl_bitmap.o 00:03:04.253 CC lib/ftl/utils/ftl_property.o 00:03:04.253 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:04.253 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:04.253 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:04.253 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:04.253 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:04.253 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:04.253 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:04.253 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:04.253 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:04.253 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:04.253 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:04.253 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:04.253 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:04.253 CC lib/ftl/base/ftl_base_dev.o 00:03:04.253 CC 
lib/ftl/base/ftl_base_bdev.o 00:03:04.253 CC lib/ftl/ftl_trace.o 00:03:04.253 LIB libspdk_lvol.a 00:03:04.253 SYMLINK libspdk_blobfs.so 00:03:04.511 SO libspdk_lvol.so.10.0 00:03:04.511 SYMLINK libspdk_lvol.so 00:03:05.079 LIB libspdk_scsi.a 00:03:05.079 LIB libspdk_nbd.a 00:03:05.079 SO libspdk_scsi.so.9.0 00:03:05.079 SO libspdk_nbd.so.7.0 00:03:05.079 SYMLINK libspdk_nbd.so 00:03:05.079 SYMLINK libspdk_scsi.so 00:03:05.337 LIB libspdk_ublk.a 00:03:05.337 SO libspdk_ublk.so.3.0 00:03:05.337 SYMLINK libspdk_ublk.so 00:03:05.337 LIB libspdk_ftl.a 00:03:05.337 CC lib/vhost/vhost.o 00:03:05.337 CC lib/vhost/vhost_rpc.o 00:03:05.337 CC lib/vhost/vhost_scsi.o 00:03:05.337 CC lib/vhost/vhost_blk.o 00:03:05.337 CC lib/vhost/rte_vhost_user.o 00:03:05.337 CC lib/iscsi/conn.o 00:03:05.337 CC lib/iscsi/init_grp.o 00:03:05.337 CC lib/iscsi/iscsi.o 00:03:05.337 CC lib/iscsi/param.o 00:03:05.337 CC lib/iscsi/portal_grp.o 00:03:05.337 CC lib/iscsi/tgt_node.o 00:03:05.337 CC lib/iscsi/iscsi_subsystem.o 00:03:05.337 CC lib/iscsi/iscsi_rpc.o 00:03:05.337 CC lib/iscsi/task.o 00:03:05.595 SO libspdk_ftl.so.9.0 00:03:05.853 SYMLINK libspdk_ftl.so 00:03:06.422 LIB libspdk_vhost.a 00:03:06.422 SO libspdk_vhost.so.8.0 00:03:06.422 SYMLINK libspdk_vhost.so 00:03:06.680 LIB libspdk_nvmf.a 00:03:06.680 SO libspdk_nvmf.so.20.0 00:03:06.680 LIB libspdk_iscsi.a 00:03:06.939 SO libspdk_iscsi.so.8.0 00:03:06.939 SYMLINK libspdk_nvmf.so 00:03:06.939 SYMLINK libspdk_iscsi.so 00:03:07.506 CC module/env_dpdk/env_dpdk_rpc.o 00:03:07.506 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:07.506 CC module/sock/posix/posix.o 00:03:07.506 CC module/accel/dsa/accel_dsa.o 00:03:07.506 CC module/accel/dsa/accel_dsa_rpc.o 00:03:07.506 CC module/accel/ioat/accel_ioat.o 00:03:07.506 CC module/blob/bdev/blob_bdev.o 00:03:07.506 CC module/accel/ioat/accel_ioat_rpc.o 00:03:07.506 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:07.506 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:07.506 CC 
module/fsdev/aio/fsdev_aio.o 00:03:07.506 CC module/fsdev/aio/linux_aio_mgr.o 00:03:07.506 CC module/scheduler/gscheduler/gscheduler.o 00:03:07.506 LIB libspdk_env_dpdk_rpc.a 00:03:07.506 CC module/keyring/file/keyring.o 00:03:07.506 CC module/keyring/file/keyring_rpc.o 00:03:07.506 CC module/accel/error/accel_error.o 00:03:07.506 CC module/accel/iaa/accel_iaa.o 00:03:07.506 CC module/keyring/linux/keyring.o 00:03:07.507 CC module/accel/error/accel_error_rpc.o 00:03:07.507 CC module/accel/iaa/accel_iaa_rpc.o 00:03:07.507 CC module/keyring/linux/keyring_rpc.o 00:03:07.764 SO libspdk_env_dpdk_rpc.so.6.0 00:03:07.764 SYMLINK libspdk_env_dpdk_rpc.so 00:03:07.764 LIB libspdk_keyring_file.a 00:03:07.764 LIB libspdk_scheduler_dpdk_governor.a 00:03:07.764 LIB libspdk_keyring_linux.a 00:03:07.764 LIB libspdk_scheduler_gscheduler.a 00:03:07.764 LIB libspdk_accel_ioat.a 00:03:07.764 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:07.764 LIB libspdk_accel_error.a 00:03:07.764 SO libspdk_keyring_file.so.2.0 00:03:07.764 SO libspdk_accel_ioat.so.6.0 00:03:07.764 LIB libspdk_scheduler_dynamic.a 00:03:07.765 SO libspdk_keyring_linux.so.1.0 00:03:07.765 SO libspdk_scheduler_gscheduler.so.4.0 00:03:07.765 SO libspdk_accel_error.so.2.0 00:03:07.765 SO libspdk_scheduler_dynamic.so.4.0 00:03:07.765 LIB libspdk_accel_iaa.a 00:03:07.765 SYMLINK libspdk_keyring_file.so 00:03:07.765 SYMLINK libspdk_keyring_linux.so 00:03:07.765 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:07.765 SYMLINK libspdk_accel_ioat.so 00:03:07.765 SYMLINK libspdk_scheduler_gscheduler.so 00:03:07.765 SO libspdk_accel_iaa.so.3.0 00:03:08.024 LIB libspdk_blob_bdev.a 00:03:08.024 SYMLINK libspdk_scheduler_dynamic.so 00:03:08.024 SYMLINK libspdk_accel_error.so 00:03:08.024 LIB libspdk_accel_dsa.a 00:03:08.024 SO libspdk_blob_bdev.so.11.0 00:03:08.024 SYMLINK libspdk_accel_iaa.so 00:03:08.024 SO libspdk_accel_dsa.so.5.0 00:03:08.024 SYMLINK libspdk_blob_bdev.so 00:03:08.024 SYMLINK libspdk_accel_dsa.so 00:03:08.283 
LIB libspdk_fsdev_aio.a 00:03:08.283 SO libspdk_fsdev_aio.so.1.0 00:03:08.283 LIB libspdk_sock_posix.a 00:03:08.283 SO libspdk_sock_posix.so.6.0 00:03:08.541 SYMLINK libspdk_fsdev_aio.so 00:03:08.541 CC module/bdev/error/vbdev_error.o 00:03:08.541 CC module/bdev/error/vbdev_error_rpc.o 00:03:08.541 CC module/bdev/gpt/gpt.o 00:03:08.541 CC module/bdev/gpt/vbdev_gpt.o 00:03:08.541 CC module/bdev/nvme/bdev_nvme.o 00:03:08.541 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:08.541 CC module/bdev/nvme/nvme_rpc.o 00:03:08.541 CC module/bdev/nvme/vbdev_opal.o 00:03:08.541 CC module/bdev/nvme/bdev_mdns_client.o 00:03:08.541 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:08.541 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:08.541 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:08.541 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:08.541 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:08.541 CC module/bdev/malloc/bdev_malloc.o 00:03:08.541 CC module/bdev/passthru/vbdev_passthru.o 00:03:08.541 CC module/bdev/lvol/vbdev_lvol.o 00:03:08.541 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:08.541 CC module/bdev/delay/vbdev_delay.o 00:03:08.541 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:08.541 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:08.541 CC module/bdev/split/vbdev_split.o 00:03:08.541 CC module/bdev/null/bdev_null.o 00:03:08.541 CC module/bdev/split/vbdev_split_rpc.o 00:03:08.541 CC module/blobfs/bdev/blobfs_bdev.o 00:03:08.541 CC module/bdev/aio/bdev_aio.o 00:03:08.541 CC module/bdev/null/bdev_null_rpc.o 00:03:08.541 CC module/bdev/aio/bdev_aio_rpc.o 00:03:08.541 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:08.541 CC module/bdev/raid/bdev_raid.o 00:03:08.541 CC module/bdev/ftl/bdev_ftl.o 00:03:08.541 CC module/bdev/iscsi/bdev_iscsi.o 00:03:08.541 CC module/bdev/raid/raid0.o 00:03:08.541 CC module/bdev/raid/bdev_raid_rpc.o 00:03:08.541 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:08.541 CC module/bdev/raid/bdev_raid_sb.o 00:03:08.541 CC module/bdev/ftl/bdev_ftl_rpc.o 
00:03:08.541 CC module/bdev/raid/concat.o 00:03:08.541 CC module/bdev/raid/raid1.o 00:03:08.541 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:08.541 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:08.541 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:08.541 SYMLINK libspdk_sock_posix.so 00:03:08.799 LIB libspdk_blobfs_bdev.a 00:03:08.799 SO libspdk_blobfs_bdev.so.6.0 00:03:08.799 LIB libspdk_bdev_error.a 00:03:08.799 LIB libspdk_bdev_split.a 00:03:08.799 LIB libspdk_bdev_gpt.a 00:03:08.799 SO libspdk_bdev_error.so.6.0 00:03:08.799 SO libspdk_bdev_split.so.6.0 00:03:08.799 SO libspdk_bdev_gpt.so.6.0 00:03:08.799 LIB libspdk_bdev_ftl.a 00:03:08.799 SYMLINK libspdk_blobfs_bdev.so 00:03:08.799 LIB libspdk_bdev_null.a 00:03:08.799 LIB libspdk_bdev_zone_block.a 00:03:08.799 SYMLINK libspdk_bdev_error.so 00:03:08.799 SO libspdk_bdev_ftl.so.6.0 00:03:08.799 LIB libspdk_bdev_passthru.a 00:03:08.799 SYMLINK libspdk_bdev_split.so 00:03:08.799 SYMLINK libspdk_bdev_gpt.so 00:03:08.799 SO libspdk_bdev_null.so.6.0 00:03:08.799 LIB libspdk_bdev_aio.a 00:03:08.799 SO libspdk_bdev_passthru.so.6.0 00:03:08.799 SO libspdk_bdev_zone_block.so.6.0 00:03:08.799 LIB libspdk_bdev_delay.a 00:03:08.799 LIB libspdk_bdev_malloc.a 00:03:08.799 SO libspdk_bdev_aio.so.6.0 00:03:08.799 SYMLINK libspdk_bdev_ftl.so 00:03:09.057 SO libspdk_bdev_malloc.so.6.0 00:03:09.057 LIB libspdk_bdev_iscsi.a 00:03:09.057 SYMLINK libspdk_bdev_null.so 00:03:09.057 SO libspdk_bdev_delay.so.6.0 00:03:09.057 SYMLINK libspdk_bdev_passthru.so 00:03:09.057 SYMLINK libspdk_bdev_zone_block.so 00:03:09.057 SO libspdk_bdev_iscsi.so.6.0 00:03:09.057 SYMLINK libspdk_bdev_aio.so 00:03:09.057 SYMLINK libspdk_bdev_malloc.so 00:03:09.057 SYMLINK libspdk_bdev_delay.so 00:03:09.057 SYMLINK libspdk_bdev_iscsi.so 00:03:09.057 LIB libspdk_bdev_lvol.a 00:03:09.057 LIB libspdk_bdev_virtio.a 00:03:09.057 SO libspdk_bdev_lvol.so.6.0 00:03:09.057 SO libspdk_bdev_virtio.so.6.0 00:03:09.057 SYMLINK libspdk_bdev_lvol.so 00:03:09.316 SYMLINK 
libspdk_bdev_virtio.so 00:03:09.574 LIB libspdk_bdev_raid.a 00:03:09.574 SO libspdk_bdev_raid.so.6.0 00:03:09.574 SYMLINK libspdk_bdev_raid.so 00:03:10.955 LIB libspdk_bdev_nvme.a 00:03:10.955 SO libspdk_bdev_nvme.so.7.1 00:03:10.955 SYMLINK libspdk_bdev_nvme.so 00:03:11.667 CC module/event/subsystems/iobuf/iobuf.o 00:03:11.667 CC module/event/subsystems/vmd/vmd.o 00:03:11.667 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:11.667 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:11.667 CC module/event/subsystems/keyring/keyring.o 00:03:11.667 CC module/event/subsystems/sock/sock.o 00:03:11.667 CC module/event/subsystems/scheduler/scheduler.o 00:03:11.667 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:11.667 CC module/event/subsystems/fsdev/fsdev.o 00:03:11.926 LIB libspdk_event_keyring.a 00:03:11.926 LIB libspdk_event_vmd.a 00:03:11.926 LIB libspdk_event_iobuf.a 00:03:11.926 LIB libspdk_event_scheduler.a 00:03:11.926 LIB libspdk_event_sock.a 00:03:11.926 LIB libspdk_event_vhost_blk.a 00:03:11.926 LIB libspdk_event_fsdev.a 00:03:11.926 SO libspdk_event_keyring.so.1.0 00:03:11.926 SO libspdk_event_vmd.so.6.0 00:03:11.926 SO libspdk_event_scheduler.so.4.0 00:03:11.926 SO libspdk_event_vhost_blk.so.3.0 00:03:11.926 SO libspdk_event_iobuf.so.3.0 00:03:11.926 SO libspdk_event_sock.so.5.0 00:03:11.926 SO libspdk_event_fsdev.so.1.0 00:03:11.926 SYMLINK libspdk_event_vmd.so 00:03:11.926 SYMLINK libspdk_event_keyring.so 00:03:11.926 SYMLINK libspdk_event_scheduler.so 00:03:11.926 SYMLINK libspdk_event_iobuf.so 00:03:11.926 SYMLINK libspdk_event_vhost_blk.so 00:03:11.926 SYMLINK libspdk_event_fsdev.so 00:03:11.926 SYMLINK libspdk_event_sock.so 00:03:12.189 CC module/event/subsystems/accel/accel.o 00:03:12.449 LIB libspdk_event_accel.a 00:03:12.449 SO libspdk_event_accel.so.6.0 00:03:12.449 SYMLINK libspdk_event_accel.so 00:03:12.708 CC module/event/subsystems/bdev/bdev.o 00:03:12.967 LIB libspdk_event_bdev.a 00:03:12.967 SO libspdk_event_bdev.so.6.0 00:03:12.967 
SYMLINK libspdk_event_bdev.so 00:03:13.226 CC module/event/subsystems/scsi/scsi.o 00:03:13.226 CC module/event/subsystems/nbd/nbd.o 00:03:13.226 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:13.226 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:13.226 CC module/event/subsystems/ublk/ublk.o 00:03:13.484 LIB libspdk_event_nbd.a 00:03:13.484 LIB libspdk_event_ublk.a 00:03:13.484 LIB libspdk_event_scsi.a 00:03:13.484 SO libspdk_event_nbd.so.6.0 00:03:13.484 SO libspdk_event_ublk.so.3.0 00:03:13.484 SO libspdk_event_scsi.so.6.0 00:03:13.484 LIB libspdk_event_nvmf.a 00:03:13.484 SYMLINK libspdk_event_nbd.so 00:03:13.484 SYMLINK libspdk_event_ublk.so 00:03:13.484 SO libspdk_event_nvmf.so.6.0 00:03:13.484 SYMLINK libspdk_event_scsi.so 00:03:13.743 SYMLINK libspdk_event_nvmf.so 00:03:13.743 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:13.743 CC module/event/subsystems/iscsi/iscsi.o 00:03:14.002 LIB libspdk_event_vhost_scsi.a 00:03:14.002 LIB libspdk_event_iscsi.a 00:03:14.002 SO libspdk_event_vhost_scsi.so.3.0 00:03:14.002 SO libspdk_event_iscsi.so.6.0 00:03:14.002 SYMLINK libspdk_event_vhost_scsi.so 00:03:14.260 SYMLINK libspdk_event_iscsi.so 00:03:14.260 SO libspdk.so.6.0 00:03:14.260 SYMLINK libspdk.so 00:03:14.836 CXX app/trace/trace.o 00:03:14.836 CC app/spdk_nvme_identify/identify.o 00:03:14.836 CC app/trace_record/trace_record.o 00:03:14.836 CC test/rpc_client/rpc_client_test.o 00:03:14.836 CC app/spdk_nvme_discover/discovery_aer.o 00:03:14.836 CC app/spdk_nvme_perf/perf.o 00:03:14.836 CC app/spdk_top/spdk_top.o 00:03:14.836 CC app/spdk_lspci/spdk_lspci.o 00:03:14.836 TEST_HEADER include/spdk/accel_module.h 00:03:14.836 TEST_HEADER include/spdk/accel.h 00:03:14.836 TEST_HEADER include/spdk/assert.h 00:03:14.836 TEST_HEADER include/spdk/barrier.h 00:03:14.836 TEST_HEADER include/spdk/base64.h 00:03:14.836 TEST_HEADER include/spdk/bdev.h 00:03:14.836 TEST_HEADER include/spdk/bdev_module.h 00:03:14.836 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.836 
TEST_HEADER include/spdk/blob_bdev.h 00:03:14.836 TEST_HEADER include/spdk/bit_pool.h 00:03:14.836 TEST_HEADER include/spdk/bit_array.h 00:03:14.836 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.836 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.836 TEST_HEADER include/spdk/blob.h 00:03:14.836 TEST_HEADER include/spdk/blobfs.h 00:03:14.836 TEST_HEADER include/spdk/conf.h 00:03:14.836 TEST_HEADER include/spdk/config.h 00:03:14.836 TEST_HEADER include/spdk/crc16.h 00:03:14.836 TEST_HEADER include/spdk/crc32.h 00:03:14.836 TEST_HEADER include/spdk/crc64.h 00:03:14.836 TEST_HEADER include/spdk/cpuset.h 00:03:14.836 TEST_HEADER include/spdk/dif.h 00:03:14.836 TEST_HEADER include/spdk/dma.h 00:03:14.836 TEST_HEADER include/spdk/endian.h 00:03:14.836 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.836 TEST_HEADER include/spdk/env.h 00:03:14.836 TEST_HEADER include/spdk/fd_group.h 00:03:14.836 TEST_HEADER include/spdk/event.h 00:03:14.836 CC app/iscsi_tgt/iscsi_tgt.o 00:03:14.836 TEST_HEADER include/spdk/fd.h 00:03:14.836 TEST_HEADER include/spdk/file.h 00:03:14.836 TEST_HEADER include/spdk/fsdev.h 00:03:14.836 TEST_HEADER include/spdk/fsdev_module.h 00:03:14.836 TEST_HEADER include/spdk/ftl.h 00:03:14.837 CC app/nvmf_tgt/nvmf_main.o 00:03:14.837 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:14.837 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.837 TEST_HEADER include/spdk/hexlify.h 00:03:14.837 TEST_HEADER include/spdk/idxd.h 00:03:14.837 TEST_HEADER include/spdk/histogram_data.h 00:03:14.837 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.837 TEST_HEADER include/spdk/init.h 00:03:14.837 TEST_HEADER include/spdk/ioat.h 00:03:14.837 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.837 TEST_HEADER include/spdk/json.h 00:03:14.837 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.837 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.837 CC app/spdk_dd/spdk_dd.o 00:03:14.837 TEST_HEADER include/spdk/keyring.h 00:03:14.837 TEST_HEADER include/spdk/keyring_module.h 00:03:14.837 
TEST_HEADER include/spdk/likely.h 00:03:14.837 TEST_HEADER include/spdk/log.h 00:03:14.837 TEST_HEADER include/spdk/lvol.h 00:03:14.837 TEST_HEADER include/spdk/md5.h 00:03:14.837 TEST_HEADER include/spdk/memory.h 00:03:14.837 TEST_HEADER include/spdk/mmio.h 00:03:14.837 TEST_HEADER include/spdk/nbd.h 00:03:14.837 TEST_HEADER include/spdk/net.h 00:03:14.837 TEST_HEADER include/spdk/nvme.h 00:03:14.837 TEST_HEADER include/spdk/notify.h 00:03:14.837 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.837 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.837 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.837 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.837 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.837 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.837 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.837 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.837 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.837 TEST_HEADER include/spdk/nvmf.h 00:03:14.837 TEST_HEADER include/spdk/opal.h 00:03:14.837 TEST_HEADER include/spdk/pci_ids.h 00:03:14.837 TEST_HEADER include/spdk/opal_spec.h 00:03:14.837 TEST_HEADER include/spdk/pipe.h 00:03:14.837 TEST_HEADER include/spdk/rpc.h 00:03:14.837 TEST_HEADER include/spdk/reduce.h 00:03:14.837 TEST_HEADER include/spdk/queue.h 00:03:14.837 TEST_HEADER include/spdk/scheduler.h 00:03:14.837 TEST_HEADER include/spdk/scsi.h 00:03:14.837 CC app/spdk_tgt/spdk_tgt.o 00:03:14.837 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.837 TEST_HEADER include/spdk/sock.h 00:03:14.837 TEST_HEADER include/spdk/stdinc.h 00:03:14.837 TEST_HEADER include/spdk/string.h 00:03:14.837 TEST_HEADER include/spdk/trace.h 00:03:14.837 TEST_HEADER include/spdk/trace_parser.h 00:03:14.837 TEST_HEADER include/spdk/tree.h 00:03:14.837 TEST_HEADER include/spdk/thread.h 00:03:14.837 TEST_HEADER include/spdk/util.h 00:03:14.837 TEST_HEADER include/spdk/ublk.h 00:03:14.837 TEST_HEADER include/spdk/version.h 00:03:14.837 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.837 
TEST_HEADER include/spdk/uuid.h 00:03:14.837 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.837 TEST_HEADER include/spdk/vhost.h 00:03:14.837 TEST_HEADER include/spdk/vmd.h 00:03:14.837 TEST_HEADER include/spdk/zipf.h 00:03:14.837 TEST_HEADER include/spdk/xor.h 00:03:14.837 CXX test/cpp_headers/accel_module.o 00:03:14.837 CXX test/cpp_headers/accel.o 00:03:14.837 CXX test/cpp_headers/assert.o 00:03:14.837 CXX test/cpp_headers/barrier.o 00:03:14.837 CXX test/cpp_headers/base64.o 00:03:14.837 CXX test/cpp_headers/bdev.o 00:03:14.837 CXX test/cpp_headers/bit_array.o 00:03:14.837 CXX test/cpp_headers/bdev_zone.o 00:03:14.837 CXX test/cpp_headers/bdev_module.o 00:03:14.837 CXX test/cpp_headers/blobfs_bdev.o 00:03:14.837 CXX test/cpp_headers/bit_pool.o 00:03:14.837 CXX test/cpp_headers/blob_bdev.o 00:03:14.837 CXX test/cpp_headers/blobfs.o 00:03:14.837 CXX test/cpp_headers/blob.o 00:03:14.837 CXX test/cpp_headers/conf.o 00:03:14.837 CXX test/cpp_headers/cpuset.o 00:03:14.837 CXX test/cpp_headers/config.o 00:03:14.837 CXX test/cpp_headers/crc16.o 00:03:14.837 CXX test/cpp_headers/crc32.o 00:03:14.837 CXX test/cpp_headers/crc64.o 00:03:14.837 CXX test/cpp_headers/dif.o 00:03:14.837 CXX test/cpp_headers/dma.o 00:03:14.837 CXX test/cpp_headers/endian.o 00:03:14.837 CXX test/cpp_headers/env_dpdk.o 00:03:14.837 CXX test/cpp_headers/event.o 00:03:14.837 CXX test/cpp_headers/env.o 00:03:14.837 CXX test/cpp_headers/fd_group.o 00:03:14.837 CXX test/cpp_headers/fd.o 00:03:14.837 CXX test/cpp_headers/fsdev.o 00:03:14.837 CXX test/cpp_headers/file.o 00:03:14.837 CXX test/cpp_headers/fsdev_module.o 00:03:14.837 CXX test/cpp_headers/ftl.o 00:03:14.837 CXX test/cpp_headers/gpt_spec.o 00:03:14.837 CXX test/cpp_headers/fuse_dispatcher.o 00:03:14.837 CXX test/cpp_headers/histogram_data.o 00:03:14.837 CXX test/cpp_headers/hexlify.o 00:03:14.837 CXX test/cpp_headers/idxd.o 00:03:14.837 CXX test/cpp_headers/idxd_spec.o 00:03:14.837 CXX test/cpp_headers/init.o 00:03:14.837 CXX 
test/cpp_headers/ioat.o 00:03:14.837 CXX test/cpp_headers/ioat_spec.o 00:03:14.837 CXX test/cpp_headers/iscsi_spec.o 00:03:14.837 CXX test/cpp_headers/json.o 00:03:14.837 CXX test/cpp_headers/jsonrpc.o 00:03:14.837 CXX test/cpp_headers/keyring.o 00:03:14.837 CXX test/cpp_headers/keyring_module.o 00:03:14.837 CXX test/cpp_headers/likely.o 00:03:14.837 CC examples/util/zipf/zipf.o 00:03:14.837 CXX test/cpp_headers/log.o 00:03:14.837 CXX test/cpp_headers/lvol.o 00:03:14.837 CXX test/cpp_headers/memory.o 00:03:14.837 CXX test/cpp_headers/md5.o 00:03:14.837 CXX test/cpp_headers/mmio.o 00:03:14.837 CXX test/cpp_headers/nbd.o 00:03:14.837 CXX test/cpp_headers/net.o 00:03:14.837 CXX test/cpp_headers/nvme.o 00:03:14.837 CXX test/cpp_headers/notify.o 00:03:14.837 CXX test/cpp_headers/nvme_intel.o 00:03:14.837 CXX test/cpp_headers/nvme_ocssd.o 00:03:14.837 CXX test/cpp_headers/nvme_spec.o 00:03:14.837 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:14.837 CXX test/cpp_headers/nvme_zns.o 00:03:14.837 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:14.837 CXX test/cpp_headers/nvmf_cmd.o 00:03:14.837 CXX test/cpp_headers/nvmf.o 00:03:14.837 CXX test/cpp_headers/nvmf_spec.o 00:03:14.837 CXX test/cpp_headers/nvmf_transport.o 00:03:14.837 CXX test/cpp_headers/opal.o 00:03:14.837 CC examples/ioat/perf/perf.o 00:03:14.837 CC test/app/jsoncat/jsoncat.o 00:03:14.837 CC examples/ioat/verify/verify.o 00:03:14.837 CC test/env/memory/memory_ut.o 00:03:14.837 CC test/env/vtophys/vtophys.o 00:03:14.837 CC test/app/stub/stub.o 00:03:14.837 CC test/app/histogram_perf/histogram_perf.o 00:03:14.837 CC test/thread/poller_perf/poller_perf.o 00:03:14.837 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.837 CC test/env/pci/pci_ut.o 00:03:14.837 CC app/fio/nvme/fio_plugin.o 00:03:14.837 CC app/fio/bdev/fio_plugin.o 00:03:14.837 CC test/app/bdev_svc/bdev_svc.o 00:03:14.837 CC test/dma/test_dma/test_dma.o 00:03:15.109 LINK spdk_lspci 00:03:15.109 LINK interrupt_tgt 00:03:15.109 LINK 
spdk_nvme_discover 00:03:15.109 LINK nvmf_tgt 00:03:15.109 LINK iscsi_tgt 00:03:15.109 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:15.376 LINK rpc_client_test 00:03:15.376 CC test/env/mem_callbacks/mem_callbacks.o 00:03:15.376 LINK spdk_tgt 00:03:15.376 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.376 LINK zipf 00:03:15.376 LINK jsoncat 00:03:15.376 CXX test/cpp_headers/opal_spec.o 00:03:15.376 LINK env_dpdk_post_init 00:03:15.376 CXX test/cpp_headers/pci_ids.o 00:03:15.376 CXX test/cpp_headers/queue.o 00:03:15.376 CXX test/cpp_headers/reduce.o 00:03:15.376 CXX test/cpp_headers/pipe.o 00:03:15.376 LINK stub 00:03:15.376 CXX test/cpp_headers/scheduler.o 00:03:15.376 CXX test/cpp_headers/rpc.o 00:03:15.376 CXX test/cpp_headers/scsi.o 00:03:15.376 CXX test/cpp_headers/scsi_spec.o 00:03:15.376 CXX test/cpp_headers/sock.o 00:03:15.376 CXX test/cpp_headers/stdinc.o 00:03:15.376 CXX test/cpp_headers/string.o 00:03:15.376 CXX test/cpp_headers/thread.o 00:03:15.376 CXX test/cpp_headers/trace.o 00:03:15.376 CXX test/cpp_headers/tree.o 00:03:15.376 CXX test/cpp_headers/trace_parser.o 00:03:15.376 CXX test/cpp_headers/ublk.o 00:03:15.376 CXX test/cpp_headers/util.o 00:03:15.376 CXX test/cpp_headers/uuid.o 00:03:15.376 CXX test/cpp_headers/version.o 00:03:15.376 CXX test/cpp_headers/vfio_user_spec.o 00:03:15.376 CXX test/cpp_headers/vfio_user_pci.o 00:03:15.376 CXX test/cpp_headers/vhost.o 00:03:15.376 CXX test/cpp_headers/vmd.o 00:03:15.376 LINK verify 00:03:15.376 CXX test/cpp_headers/xor.o 00:03:15.636 CXX test/cpp_headers/zipf.o 00:03:15.636 LINK histogram_perf 00:03:15.636 LINK poller_perf 00:03:15.636 LINK vtophys 00:03:15.636 LINK spdk_trace_record 00:03:15.636 LINK bdev_svc 00:03:15.636 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:15.636 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:15.636 LINK spdk_trace 00:03:15.636 LINK ioat_perf 00:03:15.636 LINK spdk_dd 00:03:15.894 LINK pci_ut 00:03:15.894 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.894 CC 
examples/idxd/perf/perf.o 00:03:15.894 CC examples/sock/hello_world/hello_sock.o 00:03:15.894 LINK spdk_bdev 00:03:15.894 CC test/event/reactor/reactor.o 00:03:15.894 CC examples/vmd/led/led.o 00:03:15.894 CC test/event/reactor_perf/reactor_perf.o 00:03:15.894 CC test/event/app_repeat/app_repeat.o 00:03:15.894 CC test/event/event_perf/event_perf.o 00:03:15.894 CC test/event/scheduler/scheduler.o 00:03:15.894 CC examples/thread/thread/thread_ex.o 00:03:15.894 LINK test_dma 00:03:15.894 CC app/vhost/vhost.o 00:03:15.894 LINK nvme_fuzz 00:03:15.894 LINK spdk_nvme 00:03:16.152 LINK lsvmd 00:03:16.152 LINK reactor 00:03:16.152 LINK vhost_fuzz 00:03:16.152 LINK led 00:03:16.152 LINK mem_callbacks 00:03:16.152 LINK app_repeat 00:03:16.152 LINK reactor_perf 00:03:16.152 LINK event_perf 00:03:16.152 LINK spdk_nvme_identify 00:03:16.152 LINK vhost 00:03:16.152 LINK spdk_top 00:03:16.152 LINK hello_sock 00:03:16.152 LINK scheduler 00:03:16.152 LINK spdk_nvme_perf 00:03:16.152 LINK thread 00:03:16.411 LINK idxd_perf 00:03:16.411 CC test/nvme/connect_stress/connect_stress.o 00:03:16.411 CC test/nvme/simple_copy/simple_copy.o 00:03:16.411 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:16.411 CC test/nvme/reset/reset.o 00:03:16.411 CC test/nvme/sgl/sgl.o 00:03:16.411 CC test/nvme/fdp/fdp.o 00:03:16.411 CC test/nvme/fused_ordering/fused_ordering.o 00:03:16.411 CC test/nvme/overhead/overhead.o 00:03:16.411 CC test/nvme/aer/aer.o 00:03:16.411 CC test/nvme/boot_partition/boot_partition.o 00:03:16.411 CC test/nvme/e2edp/nvme_dp.o 00:03:16.411 CC test/nvme/err_injection/err_injection.o 00:03:16.411 CC test/nvme/cuse/cuse.o 00:03:16.411 CC test/nvme/compliance/nvme_compliance.o 00:03:16.411 CC test/nvme/reserve/reserve.o 00:03:16.411 CC test/nvme/startup/startup.o 00:03:16.411 CC test/blobfs/mkfs/mkfs.o 00:03:16.411 CC test/accel/dif/dif.o 00:03:16.668 CC test/lvol/esnap/esnap.o 00:03:16.668 LINK memory_ut 00:03:16.668 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.668 CC 
examples/nvme/abort/abort.o 00:03:16.668 CC examples/nvme/arbitration/arbitration.o 00:03:16.668 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:16.668 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:16.668 CC examples/nvme/reconnect/reconnect.o 00:03:16.668 CC examples/nvme/hotplug/hotplug.o 00:03:16.668 CC examples/nvme/hello_world/hello_world.o 00:03:16.668 LINK startup 00:03:16.668 LINK doorbell_aers 00:03:16.668 LINK boot_partition 00:03:16.668 LINK connect_stress 00:03:16.668 LINK err_injection 00:03:16.668 LINK fused_ordering 00:03:16.668 CC examples/accel/perf/accel_perf.o 00:03:16.668 LINK mkfs 00:03:16.668 LINK reserve 00:03:16.668 CC examples/blob/cli/blobcli.o 00:03:16.668 LINK simple_copy 00:03:16.668 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:16.668 CC examples/blob/hello_world/hello_blob.o 00:03:16.926 LINK reset 00:03:16.926 LINK sgl 00:03:16.926 LINK nvme_dp 00:03:16.926 LINK aer 00:03:16.926 LINK overhead 00:03:16.926 LINK pmr_persistence 00:03:16.926 LINK fdp 00:03:16.926 LINK cmb_copy 00:03:16.926 LINK nvme_compliance 00:03:16.926 LINK hello_world 00:03:16.926 LINK hotplug 00:03:16.926 LINK arbitration 00:03:16.926 LINK hello_blob 00:03:17.185 LINK reconnect 00:03:17.185 LINK abort 00:03:17.185 LINK hello_fsdev 00:03:17.185 LINK nvme_manage 00:03:17.185 LINK accel_perf 00:03:17.185 LINK blobcli 00:03:17.185 LINK dif 00:03:17.444 LINK iscsi_fuzz 00:03:17.702 LINK cuse 00:03:17.702 CC examples/bdev/hello_world/hello_bdev.o 00:03:17.702 CC examples/bdev/bdevperf/bdevperf.o 00:03:17.702 CC test/bdev/bdevio/bdevio.o 00:03:17.959 LINK hello_bdev 00:03:18.218 LINK bdevio 00:03:18.476 LINK bdevperf 00:03:19.043 CC examples/nvmf/nvmf/nvmf.o 00:03:19.302 LINK nvmf 00:03:21.205 LINK esnap 00:03:21.465 00:03:21.465 real 0m59.211s 00:03:21.465 user 8m43.306s 00:03:21.465 sys 3m35.202s 00:03:21.465 15:07:49 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:21.465 15:07:49 make -- common/autotest_common.sh@10 -- $ set +x 00:03:21.465 
************************************ 00:03:21.465 END TEST make 00:03:21.465 ************************************ 00:03:21.724 15:07:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:21.724 15:07:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:21.724 15:07:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:21.724 15:07:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.724 15:07:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:21.724 15:07:49 -- pm/common@44 -- $ pid=3557103 00:03:21.724 15:07:49 -- pm/common@50 -- $ kill -TERM 3557103 00:03:21.724 15:07:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.724 15:07:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:21.724 15:07:49 -- pm/common@44 -- $ pid=3557104 00:03:21.724 15:07:49 -- pm/common@50 -- $ kill -TERM 3557104 00:03:21.724 15:07:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.724 15:07:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:21.724 15:07:49 -- pm/common@44 -- $ pid=3557107 00:03:21.724 15:07:49 -- pm/common@50 -- $ kill -TERM 3557107 00:03:21.724 15:07:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.724 15:07:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:21.724 15:07:49 -- pm/common@44 -- $ pid=3557132 00:03:21.724 15:07:49 -- pm/common@50 -- $ sudo -E kill -TERM 3557132 00:03:21.724 15:07:49 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:21.724 15:07:49 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 
00:03:21.724 15:07:49 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:21.724 15:07:49 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:21.724 15:07:49 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:21.724 15:07:49 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:21.724 15:07:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:21.724 15:07:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:21.725 15:07:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:21.725 15:07:49 -- scripts/common.sh@336 -- # IFS=.-: 00:03:21.725 15:07:49 -- scripts/common.sh@336 -- # read -ra ver1 00:03:21.725 15:07:49 -- scripts/common.sh@337 -- # IFS=.-: 00:03:21.725 15:07:49 -- scripts/common.sh@337 -- # read -ra ver2 00:03:21.725 15:07:49 -- scripts/common.sh@338 -- # local 'op=<' 00:03:21.725 15:07:49 -- scripts/common.sh@340 -- # ver1_l=2 00:03:21.725 15:07:49 -- scripts/common.sh@341 -- # ver2_l=1 00:03:21.725 15:07:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:21.725 15:07:49 -- scripts/common.sh@344 -- # case "$op" in 00:03:21.725 15:07:49 -- scripts/common.sh@345 -- # : 1 00:03:21.725 15:07:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:21.725 15:07:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:21.725 15:07:49 -- scripts/common.sh@365 -- # decimal 1 00:03:21.725 15:07:49 -- scripts/common.sh@353 -- # local d=1 00:03:21.725 15:07:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:21.725 15:07:49 -- scripts/common.sh@355 -- # echo 1 00:03:21.725 15:07:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:21.725 15:07:49 -- scripts/common.sh@366 -- # decimal 2 00:03:21.725 15:07:49 -- scripts/common.sh@353 -- # local d=2 00:03:21.725 15:07:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:21.725 15:07:49 -- scripts/common.sh@355 -- # echo 2 00:03:21.725 15:07:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:21.725 15:07:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:21.725 15:07:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:21.725 15:07:49 -- scripts/common.sh@368 -- # return 0 00:03:21.725 15:07:49 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:21.725 15:07:49 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:21.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.725 --rc genhtml_branch_coverage=1 00:03:21.725 --rc genhtml_function_coverage=1 00:03:21.725 --rc genhtml_legend=1 00:03:21.725 --rc geninfo_all_blocks=1 00:03:21.725 --rc geninfo_unexecuted_blocks=1 00:03:21.725 00:03:21.725 ' 00:03:21.725 15:07:49 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:21.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.725 --rc genhtml_branch_coverage=1 00:03:21.725 --rc genhtml_function_coverage=1 00:03:21.725 --rc genhtml_legend=1 00:03:21.725 --rc geninfo_all_blocks=1 00:03:21.725 --rc geninfo_unexecuted_blocks=1 00:03:21.725 00:03:21.725 ' 00:03:21.725 15:07:49 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:21.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.725 --rc genhtml_branch_coverage=1 00:03:21.725 --rc 
genhtml_function_coverage=1 00:03:21.725 --rc genhtml_legend=1 00:03:21.725 --rc geninfo_all_blocks=1 00:03:21.725 --rc geninfo_unexecuted_blocks=1 00:03:21.725 00:03:21.725 ' 00:03:21.725 15:07:49 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:21.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.725 --rc genhtml_branch_coverage=1 00:03:21.725 --rc genhtml_function_coverage=1 00:03:21.725 --rc genhtml_legend=1 00:03:21.725 --rc geninfo_all_blocks=1 00:03:21.725 --rc geninfo_unexecuted_blocks=1 00:03:21.725 00:03:21.725 ' 00:03:21.725 15:07:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:21.725 15:07:49 -- nvmf/common.sh@7 -- # uname -s 00:03:21.725 15:07:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:21.725 15:07:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:21.725 15:07:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:21.725 15:07:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:21.725 15:07:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:21.725 15:07:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:21.725 15:07:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:21.725 15:07:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:21.725 15:07:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:21.725 15:07:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:21.725 15:07:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:21.725 15:07:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:21.725 15:07:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:21.725 15:07:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:21.725 15:07:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:21.725 15:07:49 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:21.725 15:07:49 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:21.725 15:07:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:21.725 15:07:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:21.725 15:07:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:21.725 15:07:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:21.725 15:07:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.725 15:07:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.725 15:07:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.725 15:07:49 -- paths/export.sh@5 -- # export PATH 00:03:21.725 15:07:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.725 15:07:49 -- nvmf/common.sh@51 -- # : 0 00:03:21.725 15:07:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:21.725 15:07:49 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:21.725 15:07:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:21.725 15:07:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:21.725 15:07:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:21.725 15:07:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:21.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:21.725 15:07:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:21.725 15:07:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:21.725 15:07:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:21.725 15:07:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:21.725 15:07:49 -- spdk/autotest.sh@32 -- # uname -s 00:03:21.725 15:07:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:21.725 15:07:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:21.725 15:07:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:21.984 15:07:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:21.984 15:07:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:21.984 15:07:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:21.984 15:07:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:21.984 15:07:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:21.984 15:07:49 -- spdk/autotest.sh@48 -- # udevadm_pid=3619582 00:03:21.984 15:07:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:21.984 15:07:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:21.984 15:07:49 -- pm/common@17 -- # local monitor 00:03:21.984 15:07:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.984 15:07:49 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:21.984 15:07:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.984 15:07:49 -- pm/common@21 -- # date +%s 00:03:21.984 15:07:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.984 15:07:49 -- pm/common@21 -- # date +%s 00:03:21.984 15:07:49 -- pm/common@25 -- # sleep 1 00:03:21.984 15:07:49 -- pm/common@21 -- # date +%s 00:03:21.984 15:07:49 -- pm/common@21 -- # date +%s 00:03:21.984 15:07:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902069 00:03:21.984 15:07:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902069 00:03:21.984 15:07:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902069 00:03:21.984 15:07:49 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730902069 00:03:21.984 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730902069_collect-cpu-load.pm.log 00:03:21.984 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730902069_collect-vmstat.pm.log 00:03:21.984 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730902069_collect-cpu-temp.pm.log 00:03:21.984 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730902069_collect-bmc-pm.bmc.pm.log 00:03:22.920 
15:07:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.920 15:07:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:22.920 15:07:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:22.920 15:07:50 -- common/autotest_common.sh@10 -- # set +x 00:03:22.920 15:07:50 -- spdk/autotest.sh@59 -- # create_test_list 00:03:22.920 15:07:50 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:22.920 15:07:50 -- common/autotest_common.sh@10 -- # set +x 00:03:22.920 15:07:50 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:22.920 15:07:50 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:22.920 15:07:50 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:22.920 15:07:50 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:22.920 15:07:50 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:22.920 15:07:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:22.920 15:07:50 -- common/autotest_common.sh@1455 -- # uname 00:03:22.920 15:07:50 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:22.920 15:07:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:22.920 15:07:50 -- common/autotest_common.sh@1475 -- # uname 00:03:22.920 15:07:50 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:22.920 15:07:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:22.920 15:07:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:22.920 lcov: LCOV version 1.15 00:03:22.920 15:07:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:41.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:41.006 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:47.569 15:08:14 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:47.569 15:08:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:47.569 15:08:14 -- common/autotest_common.sh@10 -- # set +x 00:03:47.569 15:08:14 -- spdk/autotest.sh@78 -- # rm -f 00:03:47.569 15:08:14 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.473 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:49.473 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:49.473 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:49.473 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:49.473 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:49.473 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:49.473 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:49.473 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:49.473 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:49.473 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:49.473 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:49.473 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:49.732 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:49.732 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:49.732 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:49.732 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:49.732 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:49.732 15:08:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:49.732 15:08:17 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:49.732 15:08:17 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:49.732 15:08:17 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:49.732 15:08:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:49.732 15:08:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:49.732 15:08:17 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:49.732 15:08:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.732 15:08:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:49.732 15:08:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:49.732 15:08:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:49.732 15:08:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:49.732 15:08:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:49.732 15:08:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:49.732 15:08:17 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:49.732 No valid GPT data, bailing 00:03:49.732 15:08:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:49.732 15:08:17 -- scripts/common.sh@394 -- # pt= 00:03:49.732 15:08:17 -- scripts/common.sh@395 -- # return 1 00:03:49.732 15:08:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:49.732 1+0 records in 00:03:49.732 1+0 records out 00:03:49.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474046 s, 221 MB/s 00:03:49.732 15:08:17 -- spdk/autotest.sh@105 -- # sync 00:03:49.732 15:08:17 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:49.732 15:08:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:49.732 15:08:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:56.301 15:08:22 -- spdk/autotest.sh@111 -- # uname -s 00:03:56.301 15:08:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:56.301 15:08:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:56.301 15:08:22 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:58.207 Hugepages 00:03:58.207 node hugesize free / total 00:03:58.207 node0 1048576kB 0 / 0 00:03:58.207 node0 2048kB 0 / 0 00:03:58.207 node1 1048576kB 0 / 0 00:03:58.207 node1 2048kB 0 / 0 00:03:58.207 00:03:58.207 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.207 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:58.207 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:58.207 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:58.207 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:58.207 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:58.207 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:58.207 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:58.207 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:58.207 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:58.207 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:58.207 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:58.207 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:58.207 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:58.207 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:58.207 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:58.207 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:58.207 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:58.207 15:08:25 -- spdk/autotest.sh@117 -- # uname -s 00:03:58.207 15:08:25 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:58.207 15:08:25 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:58.207 15:08:25 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.497 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.497 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:02.434 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:02.693 15:08:30 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:03.630 15:08:31 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:03.630 15:08:31 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:03.630 15:08:31 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:03.630 15:08:31 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:03.630 15:08:31 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:03.630 15:08:31 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:03.630 15:08:31 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.630 15:08:31 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:03.630 15:08:31 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:04:03.889 15:08:31 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:03.889 15:08:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:03.889 15:08:31 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.425 Waiting for block devices as requested 00:04:06.425 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:06.685 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:06.685 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:06.977 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:06.977 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:06.977 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:07.281 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:07.281 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:07.281 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:07.281 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:07.281 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:07.564 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:07.564 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:07.564 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:07.564 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:07.824 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:07.824 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:07.824 15:08:35 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:07.824 15:08:35 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:07.824 15:08:35 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:07.824 15:08:35 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:04:07.824 15:08:35 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:07.824 15:08:35 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:07.824 15:08:35 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:07.824 15:08:35 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:07.824 15:08:35 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:07.824 15:08:35 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:07.824 15:08:35 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:07.824 15:08:35 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:08.083 15:08:35 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:08.083 15:08:35 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:08.083 15:08:35 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:08.083 15:08:35 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:08.083 15:08:35 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:08.083 15:08:35 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:08.083 15:08:35 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:08.083 15:08:35 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:08.083 15:08:35 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:08.083 15:08:35 -- common/autotest_common.sh@1541 -- # continue 00:04:08.083 15:08:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:08.083 15:08:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.083 15:08:35 -- common/autotest_common.sh@10 -- # set +x 00:04:08.083 15:08:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:08.083 15:08:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.083 15:08:35 -- common/autotest_common.sh@10 -- # set +x 00:04:08.083 15:08:35 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:11.374 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:04:11.374 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:11.374 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:12.311 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:12.570 15:08:40 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:12.570 15:08:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:12.570 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:04:12.570 15:08:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:12.570 15:08:40 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:12.570 15:08:40 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:12.570 15:08:40 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:12.571 15:08:40 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:12.571 15:08:40 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:12.571 15:08:40 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:12.571 15:08:40 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:12.571 15:08:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:12.571 15:08:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:12.571 15:08:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:04:12.571 15:08:40 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:12.571 15:08:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:12.571 15:08:40 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:12.571 15:08:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:12.571 15:08:40 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:12.571 15:08:40 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:12.571 15:08:40 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:12.571 15:08:40 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:12.571 15:08:40 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:12.571 15:08:40 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:12.571 15:08:40 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:04:12.571 15:08:40 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:04:12.571 15:08:40 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3633824 00:04:12.571 15:08:40 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.571 15:08:40 -- common/autotest_common.sh@1583 -- # waitforlisten 3633824 00:04:12.571 15:08:40 -- common/autotest_common.sh@833 -- # '[' -z 3633824 ']' 00:04:12.571 15:08:40 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.571 15:08:40 -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:12.571 15:08:40 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
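[Editor's note] The trace above (`get_nvme_bdfs_by_id 0x0a54`) selects NVMe controllers by reading each BDF's PCI device ID from sysfs (`cat /sys/bus/pci/devices/0000:5e:00.0/device` → `0x0a54`) and keeping only matches. A minimal sketch of that selection logic, written as a pure function over a bdf → device-id mapping rather than reading sysfs directly; the function name and the mapping are illustrative, not part of SPDK:

```python
# Sketch of the filtering done by get_nvme_bdfs_by_id in autotest_common.sh:
# keep only the BDFs whose PCI device ID matches the requested one
# (0x0a54 in the log above). Modeled as a pure function instead of
# reading /sys/bus/pci/devices/<bdf>/device, so it runs anywhere.

def filter_bdfs_by_device_id(devices, wanted_id):
    """Return the sorted BDFs whose PCI device ID equals wanted_id."""
    return [bdf for bdf, dev_id in sorted(devices.items()) if dev_id == wanted_id]

if __name__ == "__main__":
    # Device IDs as they appear in the log: 0x2021 for the ioatdma
    # channels, 0x0a54 for the single NVMe controller at 0000:5e:00.0.
    seen = {
        "0000:00:04.0": 0x2021,
        "0000:80:04.0": 0x2021,
        "0000:5e:00.0": 0x0a54,
    }
    print(filter_bdfs_by_device_id(seen, 0x0a54))  # ['0000:5e:00.0']
```

This mirrors why the trace ends with `(( 1 > 0 ))` and `printf '%s\n' 0000:5e:00.0`: exactly one controller survived the ID filter.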
00:04:12.571 15:08:40 -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:12.571 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:04:12.830 [2024-11-06 15:08:40.284237] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:04:12.830 [2024-11-06 15:08:40.284326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633824 ] 00:04:12.830 [2024-11-06 15:08:40.409507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.089 [2024-11-06 15:08:40.517597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.026 15:08:41 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:14.026 15:08:41 -- common/autotest_common.sh@866 -- # return 0 00:04:14.026 15:08:41 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:14.026 15:08:41 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:14.026 15:08:41 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:17.311 nvme0n1 00:04:17.311 15:08:44 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:17.311 [2024-11-06 15:08:44.551085] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:17.311 request: 00:04:17.311 { 00:04:17.311 "nvme_ctrlr_name": "nvme0", 00:04:17.311 "password": "test", 00:04:17.311 "method": "bdev_nvme_opal_revert", 00:04:17.311 "req_id": 1 00:04:17.311 } 00:04:17.311 Got JSON-RPC error response 00:04:17.311 response: 00:04:17.311 { 00:04:17.311 "code": -32602, 00:04:17.311 "message": "Invalid parameters" 00:04:17.311 } 00:04:17.311 15:08:44 -- common/autotest_common.sh@1589 -- # true 
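[Editor's note] The `bdev_nvme_opal_revert` call above fails with JSON-RPC error `-32602` because the attached controller does not support Opal; the test script treats this as non-fatal (`# true`). SPDK's `rpc.py` speaks JSON-RPC 2.0 over the Unix socket shown in the log (`/var/tmp/spdk.sock`). A minimal client sketch, under the assumption that one request/one reply over that socket is sufficient; `build_request` and `spdk_rpc` are illustrative names, not SPDK APIs:

```python
import json
import socket

def build_request(method, params, req_id=1):
    """Build a JSON-RPC 2.0 request like the one shown in the log."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    """Send one request to a running SPDK target and return the parsed reply.

    Reads until the accumulated bytes parse as JSON; a real client would
    also handle framing and timeouts.
    """
    req = build_request(method, params)
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                return None  # server closed before a full reply arrived
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                continue  # reply not complete yet

# Against the target above, the same call the test made:
#   spdk_rpc("bdev_nvme_opal_revert",
#            {"nvme_ctrlr_name": "nvme0", "password": "test"})
# returns an error object with "code": -32602, since nvme0 lacks Opal support.
```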
00:04:17.311 15:08:44 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:17.311 15:08:44 -- common/autotest_common.sh@1593 -- # killprocess 3633824 00:04:17.311 15:08:44 -- common/autotest_common.sh@952 -- # '[' -z 3633824 ']' 00:04:17.311 15:08:44 -- common/autotest_common.sh@956 -- # kill -0 3633824 00:04:17.311 15:08:44 -- common/autotest_common.sh@957 -- # uname 00:04:17.311 15:08:44 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:17.311 15:08:44 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3633824 00:04:17.311 15:08:44 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:17.311 15:08:44 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:17.311 15:08:44 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3633824' 00:04:17.311 killing process with pid 3633824 00:04:17.311 15:08:44 -- common/autotest_common.sh@971 -- # kill 3633824 00:04:17.311 15:08:44 -- common/autotest_common.sh@976 -- # wait 3633824 00:04:21.503 15:08:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:21.503 15:08:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:21.503 15:08:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:21.503 15:08:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:21.503 15:08:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:21.503 15:08:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.503 15:08:48 -- common/autotest_common.sh@10 -- # set +x 00:04:21.503 15:08:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:21.503 15:08:48 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:21.503 15:08:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:21.503 15:08:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.503 15:08:48 -- common/autotest_common.sh@10 -- # set +x 00:04:21.503 ************************************ 00:04:21.503 START TEST env 00:04:21.503 
************************************ 00:04:21.503 15:08:48 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:21.503 * Looking for test storage... 00:04:21.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:21.503 15:08:48 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:21.503 15:08:48 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:21.503 15:08:48 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:21.503 15:08:48 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:21.503 15:08:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.503 15:08:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.503 15:08:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.503 15:08:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.503 15:08:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.503 15:08:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.503 15:08:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.503 15:08:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.503 15:08:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.503 15:08:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.503 15:08:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.503 15:08:48 env -- scripts/common.sh@344 -- # case "$op" in 00:04:21.503 15:08:48 env -- scripts/common.sh@345 -- # : 1 00:04:21.503 15:08:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.503 15:08:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.503 15:08:48 env -- scripts/common.sh@365 -- # decimal 1 00:04:21.503 15:08:48 env -- scripts/common.sh@353 -- # local d=1 00:04:21.503 15:08:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.503 15:08:48 env -- scripts/common.sh@355 -- # echo 1 00:04:21.503 15:08:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.503 15:08:48 env -- scripts/common.sh@366 -- # decimal 2 00:04:21.503 15:08:48 env -- scripts/common.sh@353 -- # local d=2 00:04:21.503 15:08:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.503 15:08:48 env -- scripts/common.sh@355 -- # echo 2 00:04:21.503 15:08:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.503 15:08:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.503 15:08:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.503 15:08:48 env -- scripts/common.sh@368 -- # return 0 00:04:21.503 15:08:48 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.503 15:08:48 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:21.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.503 --rc genhtml_branch_coverage=1 00:04:21.503 --rc genhtml_function_coverage=1 00:04:21.503 --rc genhtml_legend=1 00:04:21.503 --rc geninfo_all_blocks=1 00:04:21.503 --rc geninfo_unexecuted_blocks=1 00:04:21.503 00:04:21.503 ' 00:04:21.503 15:08:48 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:21.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.503 --rc genhtml_branch_coverage=1 00:04:21.503 --rc genhtml_function_coverage=1 00:04:21.503 --rc genhtml_legend=1 00:04:21.503 --rc geninfo_all_blocks=1 00:04:21.503 --rc geninfo_unexecuted_blocks=1 00:04:21.503 00:04:21.503 ' 00:04:21.503 15:08:48 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:21.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:21.503 --rc genhtml_branch_coverage=1 00:04:21.503 --rc genhtml_function_coverage=1 00:04:21.503 --rc genhtml_legend=1 00:04:21.503 --rc geninfo_all_blocks=1 00:04:21.503 --rc geninfo_unexecuted_blocks=1 00:04:21.503 00:04:21.503 ' 00:04:21.503 15:08:48 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:21.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.503 --rc genhtml_branch_coverage=1 00:04:21.503 --rc genhtml_function_coverage=1 00:04:21.503 --rc genhtml_legend=1 00:04:21.503 --rc geninfo_all_blocks=1 00:04:21.503 --rc geninfo_unexecuted_blocks=1 00:04:21.503 00:04:21.503 ' 00:04:21.503 15:08:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:21.503 15:08:48 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:21.503 15:08:48 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.503 15:08:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.503 ************************************ 00:04:21.503 START TEST env_memory 00:04:21.503 ************************************ 00:04:21.503 15:08:48 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:21.503 00:04:21.503 00:04:21.503 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.503 http://cunit.sourceforge.net/ 00:04:21.503 00:04:21.503 00:04:21.503 Suite: memory 00:04:21.503 Test: alloc and free memory map ...[2024-11-06 15:08:48.997365] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:21.503 passed 00:04:21.503 Test: mem map translation ...[2024-11-06 15:08:49.037593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:21.503 [2024-11-06 
15:08:49.037617] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:21.503 [2024-11-06 15:08:49.037665] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:21.503 [2024-11-06 15:08:49.037680] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:21.503 passed 00:04:21.503 Test: mem map registration ...[2024-11-06 15:08:49.099182] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:21.504 [2024-11-06 15:08:49.099208] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:21.504 passed 00:04:21.763 Test: mem map adjacent registrations ...passed 00:04:21.763 00:04:21.763 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.763 suites 1 1 n/a 0 0 00:04:21.763 tests 4 4 4 0 0 00:04:21.763 asserts 152 152 152 0 n/a 00:04:21.763 00:04:21.763 Elapsed time = 0.226 seconds 00:04:21.763 00:04:21.763 real 0m0.260s 00:04:21.763 user 0m0.231s 00:04:21.763 sys 0m0.027s 00:04:21.763 15:08:49 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:21.763 15:08:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:21.763 ************************************ 00:04:21.763 END TEST env_memory 00:04:21.763 ************************************ 00:04:21.763 15:08:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:21.763 15:08:49 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:04:21.763 15:08:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.763 15:08:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.763 ************************************ 00:04:21.763 START TEST env_vtophys 00:04:21.763 ************************************ 00:04:21.763 15:08:49 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:21.763 EAL: lib.eal log level changed from notice to debug 00:04:21.763 EAL: Detected lcore 0 as core 0 on socket 0 00:04:21.763 EAL: Detected lcore 1 as core 1 on socket 0 00:04:21.763 EAL: Detected lcore 2 as core 2 on socket 0 00:04:21.763 EAL: Detected lcore 3 as core 3 on socket 0 00:04:21.763 EAL: Detected lcore 4 as core 4 on socket 0 00:04:21.763 EAL: Detected lcore 5 as core 5 on socket 0 00:04:21.763 EAL: Detected lcore 6 as core 6 on socket 0 00:04:21.763 EAL: Detected lcore 7 as core 8 on socket 0 00:04:21.763 EAL: Detected lcore 8 as core 9 on socket 0 00:04:21.763 EAL: Detected lcore 9 as core 10 on socket 0 00:04:21.763 EAL: Detected lcore 10 as core 11 on socket 0 00:04:21.763 EAL: Detected lcore 11 as core 12 on socket 0 00:04:21.763 EAL: Detected lcore 12 as core 13 on socket 0 00:04:21.763 EAL: Detected lcore 13 as core 16 on socket 0 00:04:21.763 EAL: Detected lcore 14 as core 17 on socket 0 00:04:21.763 EAL: Detected lcore 15 as core 18 on socket 0 00:04:21.763 EAL: Detected lcore 16 as core 19 on socket 0 00:04:21.763 EAL: Detected lcore 17 as core 20 on socket 0 00:04:21.763 EAL: Detected lcore 18 as core 21 on socket 0 00:04:21.763 EAL: Detected lcore 19 as core 25 on socket 0 00:04:21.763 EAL: Detected lcore 20 as core 26 on socket 0 00:04:21.763 EAL: Detected lcore 21 as core 27 on socket 0 00:04:21.763 EAL: Detected lcore 22 as core 28 on socket 0 00:04:21.763 EAL: Detected lcore 23 as core 29 on socket 0 00:04:21.763 EAL: Detected lcore 24 as core 0 on socket 1 00:04:21.763 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:21.763 EAL: Detected lcore 26 as core 2 on socket 1 00:04:21.763 EAL: Detected lcore 27 as core 3 on socket 1 00:04:21.763 EAL: Detected lcore 28 as core 4 on socket 1 00:04:21.763 EAL: Detected lcore 29 as core 5 on socket 1 00:04:21.763 EAL: Detected lcore 30 as core 6 on socket 1 00:04:21.763 EAL: Detected lcore 31 as core 8 on socket 1 00:04:21.763 EAL: Detected lcore 32 as core 10 on socket 1 00:04:21.763 EAL: Detected lcore 33 as core 11 on socket 1 00:04:21.763 EAL: Detected lcore 34 as core 12 on socket 1 00:04:21.763 EAL: Detected lcore 35 as core 13 on socket 1 00:04:21.763 EAL: Detected lcore 36 as core 16 on socket 1 00:04:21.763 EAL: Detected lcore 37 as core 17 on socket 1 00:04:21.763 EAL: Detected lcore 38 as core 18 on socket 1 00:04:21.763 EAL: Detected lcore 39 as core 19 on socket 1 00:04:21.763 EAL: Detected lcore 40 as core 20 on socket 1 00:04:21.763 EAL: Detected lcore 41 as core 21 on socket 1 00:04:21.763 EAL: Detected lcore 42 as core 24 on socket 1 00:04:21.763 EAL: Detected lcore 43 as core 25 on socket 1 00:04:21.763 EAL: Detected lcore 44 as core 26 on socket 1 00:04:21.763 EAL: Detected lcore 45 as core 27 on socket 1 00:04:21.763 EAL: Detected lcore 46 as core 28 on socket 1 00:04:21.763 EAL: Detected lcore 47 as core 29 on socket 1 00:04:21.763 EAL: Detected lcore 48 as core 0 on socket 0 00:04:21.763 EAL: Detected lcore 49 as core 1 on socket 0 00:04:21.763 EAL: Detected lcore 50 as core 2 on socket 0 00:04:21.763 EAL: Detected lcore 51 as core 3 on socket 0 00:04:21.763 EAL: Detected lcore 52 as core 4 on socket 0 00:04:21.763 EAL: Detected lcore 53 as core 5 on socket 0 00:04:21.763 EAL: Detected lcore 54 as core 6 on socket 0 00:04:21.763 EAL: Detected lcore 55 as core 8 on socket 0 00:04:21.763 EAL: Detected lcore 56 as core 9 on socket 0 00:04:21.763 EAL: Detected lcore 57 as core 10 on socket 0 00:04:21.763 EAL: Detected lcore 58 as core 11 on socket 0 00:04:21.763 EAL: Detected lcore 59 as core 
12 on socket 0 00:04:21.763 EAL: Detected lcore 60 as core 13 on socket 0 00:04:21.763 EAL: Detected lcore 61 as core 16 on socket 0 00:04:21.763 EAL: Detected lcore 62 as core 17 on socket 0 00:04:21.763 EAL: Detected lcore 63 as core 18 on socket 0 00:04:21.763 EAL: Detected lcore 64 as core 19 on socket 0 00:04:21.763 EAL: Detected lcore 65 as core 20 on socket 0 00:04:21.763 EAL: Detected lcore 66 as core 21 on socket 0 00:04:21.763 EAL: Detected lcore 67 as core 25 on socket 0 00:04:21.763 EAL: Detected lcore 68 as core 26 on socket 0 00:04:21.763 EAL: Detected lcore 69 as core 27 on socket 0 00:04:21.763 EAL: Detected lcore 70 as core 28 on socket 0 00:04:21.763 EAL: Detected lcore 71 as core 29 on socket 0 00:04:21.763 EAL: Detected lcore 72 as core 0 on socket 1 00:04:21.763 EAL: Detected lcore 73 as core 1 on socket 1 00:04:21.763 EAL: Detected lcore 74 as core 2 on socket 1 00:04:21.763 EAL: Detected lcore 75 as core 3 on socket 1 00:04:21.763 EAL: Detected lcore 76 as core 4 on socket 1 00:04:21.763 EAL: Detected lcore 77 as core 5 on socket 1 00:04:21.763 EAL: Detected lcore 78 as core 6 on socket 1 00:04:21.763 EAL: Detected lcore 79 as core 8 on socket 1 00:04:21.763 EAL: Detected lcore 80 as core 10 on socket 1 00:04:21.763 EAL: Detected lcore 81 as core 11 on socket 1 00:04:21.763 EAL: Detected lcore 82 as core 12 on socket 1 00:04:21.763 EAL: Detected lcore 83 as core 13 on socket 1 00:04:21.763 EAL: Detected lcore 84 as core 16 on socket 1 00:04:21.763 EAL: Detected lcore 85 as core 17 on socket 1 00:04:21.763 EAL: Detected lcore 86 as core 18 on socket 1 00:04:21.763 EAL: Detected lcore 87 as core 19 on socket 1 00:04:21.763 EAL: Detected lcore 88 as core 20 on socket 1 00:04:21.763 EAL: Detected lcore 89 as core 21 on socket 1 00:04:21.763 EAL: Detected lcore 90 as core 24 on socket 1 00:04:21.763 EAL: Detected lcore 91 as core 25 on socket 1 00:04:21.763 EAL: Detected lcore 92 as core 26 on socket 1 00:04:21.763 EAL: Detected lcore 93 as core 
27 on socket 1 00:04:21.763 EAL: Detected lcore 94 as core 28 on socket 1 00:04:21.763 EAL: Detected lcore 95 as core 29 on socket 1 00:04:21.763 EAL: Maximum logical cores by configuration: 128 00:04:21.763 EAL: Detected CPU lcores: 96 00:04:21.763 EAL: Detected NUMA nodes: 2 00:04:21.763 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:21.763 EAL: Detected shared linkage of DPDK 00:04:21.763 EAL: No shared files mode enabled, IPC will be disabled 00:04:21.763 EAL: Bus pci wants IOVA as 'DC' 00:04:21.763 EAL: Buses did not request a specific IOVA mode. 00:04:21.763 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:21.763 EAL: Selected IOVA mode 'VA' 00:04:21.763 EAL: Probing VFIO support... 00:04:21.763 EAL: IOMMU type 1 (Type 1) is supported 00:04:21.763 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:21.763 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:21.763 EAL: VFIO support initialized 00:04:21.763 EAL: Ask a virtual area of 0x2e000 bytes 00:04:21.763 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:21.763 EAL: Setting up physically contiguous memory... 
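[Editor's note] The EAL lcore dump above describes a 2-socket machine with 24 physical cores per socket and hyperthreading: lcores 0-47 are the first hardware thread of each core and lcores 48-95 their siblings, so lcore n and lcore n+48 report the same (core, socket) pair. A small sketch that parses a few sample lines (copied from the dump) and checks that sibling relation; the parser is illustrative, not part of DPDK:

```python
import re

# Four lines copied verbatim from the EAL dump above: lcore 0/48 and
# 24/72 should be hyperthread siblings on core 0 of sockets 0 and 1.
SAMPLE = """\
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 24 as core 0 on socket 1
EAL: Detected lcore 48 as core 0 on socket 0
EAL: Detected lcore 72 as core 0 on socket 1
"""

def parse_lcores(text):
    """Map lcore id -> (core id, socket id) from EAL 'Detected lcore' lines."""
    pat = re.compile(r"Detected lcore (\d+) as core (\d+) on socket (\d+)")
    return {int(l): (int(c), int(s)) for l, c, s in pat.findall(text)}

topo = parse_lcores(SAMPLE)
# Sibling check: lcore n and lcore n+48 share the same physical core.
assert topo[0] == topo[48] and topo[24] == topo[72]
```

Note the physical core IDs are non-contiguous in the dump (e.g. socket 0 skips cores 7, 14-15 and 22-24), which is why EAL reports them per-lcore rather than assuming a dense numbering.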
00:04:21.763 EAL: Setting maximum number of open files to 524288 00:04:21.763 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:21.763 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:21.763 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:21.763 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.763 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:21.763 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.763 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.763 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:21.763 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:21.763 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.763 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:21.763 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.763 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.763 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:21.763 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:21.763 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.763 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:21.763 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.763 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.763 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:21.763 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:21.763 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.763 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:21.763 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.763 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.763 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:21.763 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:21.763 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:21.763 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.763 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:21.763 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.763 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.763 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:21.763 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:21.763 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.763 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:21.763 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.763 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.763 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:21.764 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:21.764 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.764 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:21.764 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.764 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.764 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:21.764 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:21.764 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.764 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:21.764 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.764 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.764 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:21.764 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:21.764 EAL: Hugepages will be freed exactly as allocated. 
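[Editor's note] The memseg reservations above are internally consistent: each list holds `n_segs:8192` segments of `hugepage_sz:2097152` (2 MiB), which is exactly the `size = 0x400000000` (16 GiB) virtual area EAL asks for per list, with a small `0x61000` header area in front; 4 lists per socket across 2 sockets reserve 128 GiB of VA in total. A quick arithmetic check of those figures from the log:

```python
# Sanity-check the EAL memseg reservations printed above.
HUGEPAGE_SZ = 2 * 1024 * 1024   # hugepage_sz:2097152 from the log
N_SEGS = 8192                   # n_segs:8192 from the log
LIST_VA_SIZE = 0x400000000      # per-list "Ask a virtual area" size

# Each list's VA reservation is exactly n_segs full hugepages: 16 GiB.
assert N_SEGS * HUGEPAGE_SZ == LIST_VA_SIZE

# 4 segment lists per socket, 2 sockets detected.
lists_total = 4 * 2
print(lists_total * LIST_VA_SIZE // 2**30)  # → 128 (GiB of reserved VA)
```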
00:04:21.764 EAL: No shared files mode enabled, IPC is disabled 00:04:21.764 EAL: No shared files mode enabled, IPC is disabled 00:04:21.764 EAL: TSC frequency is ~2100000 KHz 00:04:21.764 EAL: Main lcore 0 is ready (tid=7f69eb1bba40;cpuset=[0]) 00:04:21.764 EAL: Trying to obtain current memory policy. 00:04:21.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.764 EAL: Restoring previous memory policy: 0 00:04:21.764 EAL: request: mp_malloc_sync 00:04:21.764 EAL: No shared files mode enabled, IPC is disabled 00:04:21.764 EAL: Heap on socket 0 was expanded by 2MB 00:04:21.764 EAL: No shared files mode enabled, IPC is disabled 00:04:22.022 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:22.022 EAL: Mem event callback 'spdk:(nil)' registered 00:04:22.022 00:04:22.022 00:04:22.022 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.022 http://cunit.sourceforge.net/ 00:04:22.022 00:04:22.022 00:04:22.022 Suite: components_suite 00:04:22.280 Test: vtophys_malloc_test ...passed 00:04:22.280 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:22.280 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.280 EAL: Restoring previous memory policy: 4 00:04:22.280 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.280 EAL: request: mp_malloc_sync 00:04:22.280 EAL: No shared files mode enabled, IPC is disabled 00:04:22.280 EAL: Heap on socket 0 was expanded by 4MB 00:04:22.280 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.280 EAL: request: mp_malloc_sync 00:04:22.280 EAL: No shared files mode enabled, IPC is disabled 00:04:22.280 EAL: Heap on socket 0 was shrunk by 4MB 00:04:22.280 EAL: Trying to obtain current memory policy. 
00:04:22.280 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.280 EAL: Restoring previous memory policy: 4 00:04:22.280 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.280 EAL: request: mp_malloc_sync 00:04:22.280 EAL: No shared files mode enabled, IPC is disabled 00:04:22.280 EAL: Heap on socket 0 was expanded by 6MB 00:04:22.280 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.280 EAL: request: mp_malloc_sync 00:04:22.280 EAL: No shared files mode enabled, IPC is disabled 00:04:22.280 EAL: Heap on socket 0 was shrunk by 6MB 00:04:22.280 EAL: Trying to obtain current memory policy. 00:04:22.280 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.280 EAL: Restoring previous memory policy: 4 00:04:22.280 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.280 EAL: request: mp_malloc_sync 00:04:22.280 EAL: No shared files mode enabled, IPC is disabled 00:04:22.281 EAL: Heap on socket 0 was expanded by 10MB 00:04:22.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.281 EAL: request: mp_malloc_sync 00:04:22.281 EAL: No shared files mode enabled, IPC is disabled 00:04:22.281 EAL: Heap on socket 0 was shrunk by 10MB 00:04:22.281 EAL: Trying to obtain current memory policy. 00:04:22.281 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.281 EAL: Restoring previous memory policy: 4 00:04:22.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.281 EAL: request: mp_malloc_sync 00:04:22.281 EAL: No shared files mode enabled, IPC is disabled 00:04:22.281 EAL: Heap on socket 0 was expanded by 18MB 00:04:22.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.281 EAL: request: mp_malloc_sync 00:04:22.281 EAL: No shared files mode enabled, IPC is disabled 00:04:22.281 EAL: Heap on socket 0 was shrunk by 18MB 00:04:22.281 EAL: Trying to obtain current memory policy. 
00:04:22.281 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.281 EAL: Restoring previous memory policy: 4 00:04:22.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.281 EAL: request: mp_malloc_sync 00:04:22.281 EAL: No shared files mode enabled, IPC is disabled 00:04:22.281 EAL: Heap on socket 0 was expanded by 34MB 00:04:22.281 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.281 EAL: request: mp_malloc_sync 00:04:22.281 EAL: No shared files mode enabled, IPC is disabled 00:04:22.281 EAL: Heap on socket 0 was shrunk by 34MB 00:04:22.539 EAL: Trying to obtain current memory policy. 00:04:22.539 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.539 EAL: Restoring previous memory policy: 4 00:04:22.539 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.539 EAL: request: mp_malloc_sync 00:04:22.539 EAL: No shared files mode enabled, IPC is disabled 00:04:22.539 EAL: Heap on socket 0 was expanded by 66MB 00:04:22.539 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.539 EAL: request: mp_malloc_sync 00:04:22.539 EAL: No shared files mode enabled, IPC is disabled 00:04:22.539 EAL: Heap on socket 0 was shrunk by 66MB 00:04:22.798 EAL: Trying to obtain current memory policy. 00:04:22.798 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.798 EAL: Restoring previous memory policy: 4 00:04:22.798 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.798 EAL: request: mp_malloc_sync 00:04:22.798 EAL: No shared files mode enabled, IPC is disabled 00:04:22.798 EAL: Heap on socket 0 was expanded by 130MB 00:04:23.057 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.057 EAL: request: mp_malloc_sync 00:04:23.057 EAL: No shared files mode enabled, IPC is disabled 00:04:23.057 EAL: Heap on socket 0 was shrunk by 130MB 00:04:23.057 EAL: Trying to obtain current memory policy. 
00:04:23.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.316 EAL: Restoring previous memory policy: 4 00:04:23.316 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.316 EAL: request: mp_malloc_sync 00:04:23.316 EAL: No shared files mode enabled, IPC is disabled 00:04:23.316 EAL: Heap on socket 0 was expanded by 258MB 00:04:23.575 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.575 EAL: request: mp_malloc_sync 00:04:23.575 EAL: No shared files mode enabled, IPC is disabled 00:04:23.575 EAL: Heap on socket 0 was shrunk by 258MB 00:04:24.143 EAL: Trying to obtain current memory policy. 00:04:24.143 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.143 EAL: Restoring previous memory policy: 4 00:04:24.143 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.143 EAL: request: mp_malloc_sync 00:04:24.143 EAL: No shared files mode enabled, IPC is disabled 00:04:24.143 EAL: Heap on socket 0 was expanded by 514MB 00:04:25.079 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.079 EAL: request: mp_malloc_sync 00:04:25.079 EAL: No shared files mode enabled, IPC is disabled 00:04:25.079 EAL: Heap on socket 0 was shrunk by 514MB 00:04:26.015 EAL: Trying to obtain current memory policy. 
00:04:26.015 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.273 EAL: Restoring previous memory policy: 4 00:04:26.273 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.274 EAL: request: mp_malloc_sync 00:04:26.274 EAL: No shared files mode enabled, IPC is disabled 00:04:26.274 EAL: Heap on socket 0 was expanded by 1026MB 00:04:28.178 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.178 EAL: request: mp_malloc_sync 00:04:28.178 EAL: No shared files mode enabled, IPC is disabled 00:04:28.178 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:30.080 passed 00:04:30.080 00:04:30.080 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.080 suites 1 1 n/a 0 0 00:04:30.080 tests 2 2 2 0 0 00:04:30.080 asserts 497 497 497 0 n/a 00:04:30.080 00:04:30.080 Elapsed time = 7.705 seconds 00:04:30.080 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.080 EAL: request: mp_malloc_sync 00:04:30.080 EAL: No shared files mode enabled, IPC is disabled 00:04:30.080 EAL: Heap on socket 0 was shrunk by 2MB 00:04:30.080 EAL: No shared files mode enabled, IPC is disabled 00:04:30.080 EAL: No shared files mode enabled, IPC is disabled 00:04:30.080 EAL: No shared files mode enabled, IPC is disabled 00:04:30.080 00:04:30.080 real 0m7.956s 00:04:30.080 user 0m7.142s 00:04:30.080 sys 0m0.756s 00:04:30.080 15:08:57 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.080 15:08:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:30.080 ************************************ 00:04:30.080 END TEST env_vtophys 00:04:30.080 ************************************ 00:04:30.080 15:08:57 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:30.080 15:08:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.080 15:08:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.080 15:08:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.080 
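The env_vtophys pass above expands and shrinks the heap in steps of 6, 10, 18, 34, 66, 130, 258, 514 and 1026 MB — each a power of two plus a constant 2 MB. A minimal Python sketch reproducing that size sequence (the 2 MB constant is inferred from the log output, not taken from the test source):

```python
def expansion_sizes_mb(max_mb=1026):
    """Yield the heap-expansion sizes observed in the env_vtophys log:
    power-of-two allocation sizes plus a constant 2 MB of apparent overhead."""
    n = 4  # first observed step: 4 MB + 2 MB = 6 MB
    while n + 2 <= max_mb:
        yield n + 2
        n *= 2

print(list(expansion_sizes_mb()))
# → [6, 10, 18, 34, 66, 130, 258, 514, 1026]
```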
************************************ 00:04:30.080 START TEST env_pci 00:04:30.080 ************************************ 00:04:30.080 15:08:57 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:30.080 00:04:30.080 00:04:30.080 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.080 http://cunit.sourceforge.net/ 00:04:30.080 00:04:30.080 00:04:30.080 Suite: pci 00:04:30.080 Test: pci_hook ...[2024-11-06 15:08:57.330374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3636747 has claimed it 00:04:30.080 EAL: Cannot find device (10000:00:01.0) 00:04:30.080 EAL: Failed to attach device on primary process 00:04:30.080 passed 00:04:30.080 00:04:30.080 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.080 suites 1 1 n/a 0 0 00:04:30.080 tests 1 1 1 0 0 00:04:30.080 asserts 25 25 25 0 n/a 00:04:30.080 00:04:30.080 Elapsed time = 0.042 seconds 00:04:30.080 00:04:30.080 real 0m0.117s 00:04:30.080 user 0m0.052s 00:04:30.080 sys 0m0.064s 00:04:30.081 15:08:57 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.081 15:08:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:30.081 ************************************ 00:04:30.081 END TEST env_pci 00:04:30.081 ************************************ 00:04:30.081 15:08:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:30.081 15:08:57 env -- env/env.sh@15 -- # uname 00:04:30.081 15:08:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:30.081 15:08:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:30.081 15:08:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:30.081 15:08:57 env -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:30.081 15:08:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.081 15:08:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.081 ************************************ 00:04:30.081 START TEST env_dpdk_post_init 00:04:30.081 ************************************ 00:04:30.081 15:08:57 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:30.081 EAL: Detected CPU lcores: 96 00:04:30.081 EAL: Detected NUMA nodes: 2 00:04:30.081 EAL: Detected shared linkage of DPDK 00:04:30.081 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:30.081 EAL: Selected IOVA mode 'VA' 00:04:30.081 EAL: VFIO support initialized 00:04:30.081 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:30.081 EAL: Using IOMMU type 1 (Type 1) 00:04:30.340 EAL: Ignore mapping IO port bar(1) 00:04:30.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:30.340 EAL: Ignore mapping IO port bar(1) 00:04:30.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:30.340 EAL: Ignore mapping IO port bar(1) 00:04:30.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:30.340 EAL: Ignore mapping IO port bar(1) 00:04:30.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:30.340 EAL: Ignore mapping IO port bar(1) 00:04:30.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:30.340 EAL: Ignore mapping IO port bar(1) 00:04:30.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:30.340 EAL: Ignore mapping IO port bar(1) 00:04:30.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:30.340 EAL: Ignore mapping IO port bar(1) 00:04:30.340 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:31.276 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:31.276 EAL: Ignore mapping IO port bar(1) 00:04:31.276 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:31.276 EAL: Ignore mapping IO port bar(1) 00:04:31.276 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:31.276 EAL: Ignore mapping IO port bar(1) 00:04:31.276 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:31.276 EAL: Ignore mapping IO port bar(1) 00:04:31.276 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:31.276 EAL: Ignore mapping IO port bar(1) 00:04:31.276 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:31.276 EAL: Ignore mapping IO port bar(1) 00:04:31.276 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:31.276 EAL: Ignore mapping IO port bar(1) 00:04:31.276 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:31.276 EAL: Ignore mapping IO port bar(1) 00:04:31.276 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:34.561 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:34.561 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:35.128 Starting DPDK initialization... 00:04:35.128 Starting SPDK post initialization... 00:04:35.128 SPDK NVMe probe 00:04:35.128 Attaching to 0000:5e:00.0 00:04:35.128 Attached to 0000:5e:00.0 00:04:35.128 Cleaning up... 
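The post-init output above probes eight spdk_ioat channels per socket plus one spdk_nvme controller. A hedged Python sketch of parsing those EAL probe lines into structured records — the regex mirrors the line format observed in this log; it is illustrative, not an SPDK tool:

```python
import re

# Matches EAL probe lines of the shape seen above, e.g.:
#   EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
PROBE_RE = re.compile(
    r"Probe PCI driver: (?P<driver>\S+) \((?P<vd>[0-9a-f]{4}:[0-9a-f]{4})\) "
    r"device: (?P<bdf>[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]) "
    r"\(socket (?P<socket>\d+)\)"
)

def parse_probes(lines):
    """Extract (driver, vendor:device, BDF, socket) from EAL probe lines."""
    return [m.groupdict() for line in lines if (m := PROBE_RE.search(line))]

sample = [
    "EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)",
    "EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)",
    "EAL: Ignore mapping IO port bar(1)",  # non-probe lines are skipped
]
probes = parse_probes(sample)
```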
00:04:35.128 00:04:35.128 real 0m5.100s 00:04:35.128 user 0m3.603s 00:04:35.128 sys 0m0.557s 00:04:35.128 15:09:02 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.128 15:09:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:35.128 ************************************ 00:04:35.128 END TEST env_dpdk_post_init 00:04:35.128 ************************************ 00:04:35.128 15:09:02 env -- env/env.sh@26 -- # uname 00:04:35.128 15:09:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:35.128 15:09:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.128 15:09:02 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.128 15:09:02 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.128 15:09:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.128 ************************************ 00:04:35.128 START TEST env_mem_callbacks 00:04:35.128 ************************************ 00:04:35.128 15:09:02 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:35.128 EAL: Detected CPU lcores: 96 00:04:35.128 EAL: Detected NUMA nodes: 2 00:04:35.128 EAL: Detected shared linkage of DPDK 00:04:35.128 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.128 EAL: Selected IOVA mode 'VA' 00:04:35.128 EAL: VFIO support initialized 00:04:35.128 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.128 00:04:35.128 00:04:35.128 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.128 http://cunit.sourceforge.net/ 00:04:35.128 00:04:35.128 00:04:35.128 Suite: memory 00:04:35.128 Test: test ... 
00:04:35.128 register 0x200000200000 2097152 00:04:35.128 malloc 3145728 00:04:35.128 register 0x200000400000 4194304 00:04:35.387 buf 0x2000004fffc0 len 3145728 PASSED 00:04:35.387 malloc 64 00:04:35.387 buf 0x2000004ffec0 len 64 PASSED 00:04:35.387 malloc 4194304 00:04:35.387 register 0x200000800000 6291456 00:04:35.387 buf 0x2000009fffc0 len 4194304 PASSED 00:04:35.387 free 0x2000004fffc0 3145728 00:04:35.387 free 0x2000004ffec0 64 00:04:35.387 unregister 0x200000400000 4194304 PASSED 00:04:35.387 free 0x2000009fffc0 4194304 00:04:35.387 unregister 0x200000800000 6291456 PASSED 00:04:35.387 malloc 8388608 00:04:35.387 register 0x200000400000 10485760 00:04:35.387 buf 0x2000005fffc0 len 8388608 PASSED 00:04:35.387 free 0x2000005fffc0 8388608 00:04:35.387 unregister 0x200000400000 10485760 PASSED 00:04:35.387 passed 00:04:35.387 00:04:35.387 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.387 suites 1 1 n/a 0 0 00:04:35.387 tests 1 1 1 0 0 00:04:35.387 asserts 15 15 15 0 n/a 00:04:35.387 00:04:35.387 Elapsed time = 0.078 seconds 00:04:35.387 00:04:35.387 real 0m0.186s 00:04:35.387 user 0m0.102s 00:04:35.387 sys 0m0.083s 00:04:35.387 15:09:02 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.387 15:09:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:35.387 ************************************ 00:04:35.387 END TEST env_mem_callbacks 00:04:35.387 ************************************ 00:04:35.387 00:04:35.387 real 0m14.155s 00:04:35.387 user 0m11.375s 00:04:35.387 sys 0m1.814s 00:04:35.387 15:09:02 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.387 15:09:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.387 ************************************ 00:04:35.387 END TEST env 00:04:35.387 ************************************ 00:04:35.387 15:09:02 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:35.387 15:09:02 
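The env_mem_callbacks trace above interleaves register/unregister events with malloc/free pairs. A small illustrative checker for such a trace — every registered region should later be unregistered and every buffer freed; this is a sketch inferred from the log format, not part of the SPDK test:

```python
def check_balanced(events):
    """Return True if every register has a matching unregister and every
    buf a matching free, in the style of the env_mem_callbacks trace."""
    regions, bufs = set(), set()
    for kind, addr in events:
        if kind == "register":
            regions.add(addr)
        elif kind == "unregister":
            assert addr in regions, f"unregister of unknown region {addr:#x}"
            regions.discard(addr)
        elif kind == "buf":
            bufs.add(addr)
        elif kind == "free":
            assert addr in bufs, f"free of unknown buf {addr:#x}"
            bufs.discard(addr)
    return not regions and not bufs

# A subset of the events from the trace above (the initial 2 MB region
# stays registered for the process lifetime, so it is omitted here).
events = [
    ("register",   0x200000400000),
    ("buf",        0x2000004fffc0),
    ("free",       0x2000004fffc0),
    ("unregister", 0x200000400000),
]
assert check_balanced(events)
```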
-- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.387 15:09:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.387 15:09:02 -- common/autotest_common.sh@10 -- # set +x 00:04:35.387 ************************************ 00:04:35.387 START TEST rpc 00:04:35.387 ************************************ 00:04:35.387 15:09:02 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:35.646 * Looking for test storage... 00:04:35.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.646 15:09:03 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:35.646 15:09:03 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:35.646 15:09:03 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:35.646 15:09:03 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:35.646 15:09:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.646 15:09:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.646 15:09:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.646 15:09:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.646 15:09:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.646 15:09:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.646 15:09:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.646 15:09:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.646 15:09:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.646 15:09:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.646 15:09:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.646 15:09:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:35.646 15:09:03 rpc -- scripts/common.sh@345 -- # : 1 00:04:35.646 15:09:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.646 15:09:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.646 15:09:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:35.646 15:09:03 rpc -- scripts/common.sh@353 -- # local d=1 00:04:35.646 15:09:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.646 15:09:03 rpc -- scripts/common.sh@355 -- # echo 1 00:04:35.646 15:09:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.646 15:09:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:35.646 15:09:03 rpc -- scripts/common.sh@353 -- # local d=2 00:04:35.646 15:09:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.646 15:09:03 rpc -- scripts/common.sh@355 -- # echo 2 00:04:35.646 15:09:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.646 15:09:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.646 15:09:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.646 15:09:03 rpc -- scripts/common.sh@368 -- # return 0 00:04:35.646 15:09:03 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.646 15:09:03 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:35.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.646 --rc genhtml_branch_coverage=1 00:04:35.646 --rc genhtml_function_coverage=1 00:04:35.646 --rc genhtml_legend=1 00:04:35.646 --rc geninfo_all_blocks=1 00:04:35.646 --rc geninfo_unexecuted_blocks=1 00:04:35.646 00:04:35.646 ' 00:04:35.646 15:09:03 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:35.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.646 --rc genhtml_branch_coverage=1 00:04:35.646 --rc genhtml_function_coverage=1 00:04:35.646 --rc genhtml_legend=1 00:04:35.646 --rc geninfo_all_blocks=1 00:04:35.646 --rc geninfo_unexecuted_blocks=1 00:04:35.646 00:04:35.647 ' 00:04:35.647 15:09:03 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:35.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:35.647 --rc genhtml_branch_coverage=1 00:04:35.647 --rc genhtml_function_coverage=1 00:04:35.647 --rc genhtml_legend=1 00:04:35.647 --rc geninfo_all_blocks=1 00:04:35.647 --rc geninfo_unexecuted_blocks=1 00:04:35.647 00:04:35.647 ' 00:04:35.647 15:09:03 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:35.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.647 --rc genhtml_branch_coverage=1 00:04:35.647 --rc genhtml_function_coverage=1 00:04:35.647 --rc genhtml_legend=1 00:04:35.647 --rc geninfo_all_blocks=1 00:04:35.647 --rc geninfo_unexecuted_blocks=1 00:04:35.647 00:04:35.647 ' 00:04:35.647 15:09:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3637933 00:04:35.647 15:09:03 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:35.647 15:09:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.647 15:09:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3637933 00:04:35.647 15:09:03 rpc -- common/autotest_common.sh@833 -- # '[' -z 3637933 ']' 00:04:35.647 15:09:03 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.647 15:09:03 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:35.647 15:09:03 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.647 15:09:03 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:35.647 15:09:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.647 [2024-11-06 15:09:03.211531] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
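The scripts/common.sh xtrace earlier in this rpc section decides whether the installed lcov (1.15) predates version 2 by splitting on `.` and `-` and comparing fields numerically. A minimal Python rendering of that field-by-field comparison (the function name and the zero-padding of missing fields are assumptions, not SPDK code):

```python
import re

def version_lt(ver1: str, ver2: str) -> bool:
    """True if ver1 is strictly older than ver2, comparing dotted version
    fields numerically, as in the cmp_versions trace above."""
    a = [int(x) for x in re.split(r"[.-]", ver1)]
    b = [int(x) for x in re.split(r"[.-]", ver2)]
    # Pad the shorter version with zeros so "2" compares as "2.0".
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return x < y
    return False  # equal versions are not strictly less

assert version_lt("1.15", "2")       # lcov 1.15 predates 2, as in the log
assert not version_lt("2", "1.15")
```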
00:04:35.647 [2024-11-06 15:09:03.211622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637933 ] 00:04:35.906 [2024-11-06 15:09:03.321453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.906 [2024-11-06 15:09:03.431985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:35.906 [2024-11-06 15:09:03.432023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3637933' to capture a snapshot of events at runtime. 00:04:35.906 [2024-11-06 15:09:03.432036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:35.906 [2024-11-06 15:09:03.432046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:35.906 [2024-11-06 15:09:03.432061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3637933 for offline analysis/debug. 
00:04:35.906 [2024-11-06 15:09:03.433476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.861 15:09:04 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:36.861 15:09:04 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:36.861 15:09:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.861 15:09:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.861 15:09:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:36.861 15:09:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:36.861 15:09:04 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.861 15:09:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.861 15:09:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.861 ************************************ 00:04:36.861 START TEST rpc_integrity 00:04:36.861 ************************************ 00:04:36.861 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:36.861 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.861 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.861 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.861 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.861 15:09:04 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.861 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:36.861 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:36.861 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.861 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.861 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.861 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.861 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:36.861 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.861 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.861 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.861 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.861 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.861 { 00:04:36.861 "name": "Malloc0", 00:04:36.861 "aliases": [ 00:04:36.861 "3505a950-e1dc-4752-9e28-b6704bdf3f4d" 00:04:36.861 ], 00:04:36.861 "product_name": "Malloc disk", 00:04:36.861 "block_size": 512, 00:04:36.861 "num_blocks": 16384, 00:04:36.861 "uuid": "3505a950-e1dc-4752-9e28-b6704bdf3f4d", 00:04:36.861 "assigned_rate_limits": { 00:04:36.861 "rw_ios_per_sec": 0, 00:04:36.861 "rw_mbytes_per_sec": 0, 00:04:36.861 "r_mbytes_per_sec": 0, 00:04:36.861 "w_mbytes_per_sec": 0 00:04:36.861 }, 00:04:36.861 "claimed": false, 00:04:36.861 "zoned": false, 00:04:36.861 "supported_io_types": { 00:04:36.861 "read": true, 00:04:36.861 "write": true, 00:04:36.861 "unmap": true, 00:04:36.861 "flush": true, 00:04:36.861 "reset": true, 00:04:36.861 "nvme_admin": false, 00:04:36.862 "nvme_io": false, 00:04:36.862 "nvme_io_md": false, 00:04:36.862 "write_zeroes": true, 00:04:36.862 "zcopy": true, 00:04:36.862 "get_zone_info": false, 00:04:36.862 
"zone_management": false, 00:04:36.862 "zone_append": false, 00:04:36.862 "compare": false, 00:04:36.862 "compare_and_write": false, 00:04:36.862 "abort": true, 00:04:36.862 "seek_hole": false, 00:04:36.862 "seek_data": false, 00:04:36.862 "copy": true, 00:04:36.862 "nvme_iov_md": false 00:04:36.862 }, 00:04:36.862 "memory_domains": [ 00:04:36.862 { 00:04:36.862 "dma_device_id": "system", 00:04:36.862 "dma_device_type": 1 00:04:36.862 }, 00:04:36.862 { 00:04:36.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.862 "dma_device_type": 2 00:04:36.862 } 00:04:36.862 ], 00:04:36.862 "driver_specific": {} 00:04:36.862 } 00:04:36.862 ]' 00:04:36.862 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:36.862 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.862 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:36.862 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.862 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.862 [2024-11-06 15:09:04.442795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:36.862 [2024-11-06 15:09:04.442846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.862 [2024-11-06 15:09:04.442867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022e80 00:04:36.862 [2024-11-06 15:09:04.442878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.862 [2024-11-06 15:09:04.444834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.862 [2024-11-06 15:09:04.444861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.862 Passthru0 00:04:36.862 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.862 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:36.862 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.862 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.862 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.862 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.862 { 00:04:36.862 "name": "Malloc0", 00:04:36.862 "aliases": [ 00:04:36.862 "3505a950-e1dc-4752-9e28-b6704bdf3f4d" 00:04:36.862 ], 00:04:36.862 "product_name": "Malloc disk", 00:04:36.862 "block_size": 512, 00:04:36.862 "num_blocks": 16384, 00:04:36.862 "uuid": "3505a950-e1dc-4752-9e28-b6704bdf3f4d", 00:04:36.862 "assigned_rate_limits": { 00:04:36.862 "rw_ios_per_sec": 0, 00:04:36.862 "rw_mbytes_per_sec": 0, 00:04:36.862 "r_mbytes_per_sec": 0, 00:04:36.862 "w_mbytes_per_sec": 0 00:04:36.862 }, 00:04:36.862 "claimed": true, 00:04:36.862 "claim_type": "exclusive_write", 00:04:36.862 "zoned": false, 00:04:36.862 "supported_io_types": { 00:04:36.862 "read": true, 00:04:36.862 "write": true, 00:04:36.862 "unmap": true, 00:04:36.862 "flush": true, 00:04:36.862 "reset": true, 00:04:36.862 "nvme_admin": false, 00:04:36.862 "nvme_io": false, 00:04:36.862 "nvme_io_md": false, 00:04:36.862 "write_zeroes": true, 00:04:36.862 "zcopy": true, 00:04:36.862 "get_zone_info": false, 00:04:36.862 "zone_management": false, 00:04:36.862 "zone_append": false, 00:04:36.862 "compare": false, 00:04:36.862 "compare_and_write": false, 00:04:36.862 "abort": true, 00:04:36.862 "seek_hole": false, 00:04:36.862 "seek_data": false, 00:04:36.862 "copy": true, 00:04:36.862 "nvme_iov_md": false 00:04:36.862 }, 00:04:36.862 "memory_domains": [ 00:04:36.862 { 00:04:36.862 "dma_device_id": "system", 00:04:36.862 "dma_device_type": 1 00:04:36.862 }, 00:04:36.862 { 00:04:36.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.862 "dma_device_type": 2 00:04:36.862 } 00:04:36.862 ], 00:04:36.862 "driver_specific": {} 00:04:36.862 }, 00:04:36.862 { 
00:04:36.862 "name": "Passthru0", 00:04:36.862 "aliases": [ 00:04:36.862 "67bedb4f-1abc-5b02-9239-dce13999ab45" 00:04:36.862 ], 00:04:36.862 "product_name": "passthru", 00:04:36.862 "block_size": 512, 00:04:36.862 "num_blocks": 16384, 00:04:36.862 "uuid": "67bedb4f-1abc-5b02-9239-dce13999ab45", 00:04:36.862 "assigned_rate_limits": { 00:04:36.862 "rw_ios_per_sec": 0, 00:04:36.862 "rw_mbytes_per_sec": 0, 00:04:36.862 "r_mbytes_per_sec": 0, 00:04:36.862 "w_mbytes_per_sec": 0 00:04:36.862 }, 00:04:36.862 "claimed": false, 00:04:36.862 "zoned": false, 00:04:36.862 "supported_io_types": { 00:04:36.862 "read": true, 00:04:36.862 "write": true, 00:04:36.862 "unmap": true, 00:04:36.862 "flush": true, 00:04:36.862 "reset": true, 00:04:36.862 "nvme_admin": false, 00:04:36.862 "nvme_io": false, 00:04:36.862 "nvme_io_md": false, 00:04:36.862 "write_zeroes": true, 00:04:36.862 "zcopy": true, 00:04:36.862 "get_zone_info": false, 00:04:36.862 "zone_management": false, 00:04:36.862 "zone_append": false, 00:04:36.862 "compare": false, 00:04:36.862 "compare_and_write": false, 00:04:36.862 "abort": true, 00:04:36.862 "seek_hole": false, 00:04:36.862 "seek_data": false, 00:04:36.862 "copy": true, 00:04:36.862 "nvme_iov_md": false 00:04:36.862 }, 00:04:36.862 "memory_domains": [ 00:04:36.862 { 00:04:36.862 "dma_device_id": "system", 00:04:36.862 "dma_device_type": 1 00:04:36.862 }, 00:04:36.862 { 00:04:36.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.862 "dma_device_type": 2 00:04:36.862 } 00:04:36.862 ], 00:04:36.862 "driver_specific": { 00:04:36.862 "passthru": { 00:04:36.862 "name": "Passthru0", 00:04:36.862 "base_bdev_name": "Malloc0" 00:04:36.862 } 00:04:36.862 } 00:04:36.862 } 00:04:36.862 ]' 00:04:36.862 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.122 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.122 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.122 15:09:04 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.122 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.122 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.122 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.122 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.122 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.122 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.122 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.122 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.122 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.122 15:09:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.122 00:04:37.122 real 0m0.306s 00:04:37.122 user 0m0.175s 00:04:37.122 sys 0m0.035s 00:04:37.122 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:37.122 15:09:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 ************************************ 00:04:37.122 END TEST rpc_integrity 00:04:37.122 ************************************ 00:04:37.122 15:09:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.122 15:09:04 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.122 15:09:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.122 15:09:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 ************************************ 00:04:37.122 START TEST rpc_plugins 
00:04:37.122 ************************************ 00:04:37.122 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:37.122 15:09:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.122 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.122 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.122 15:09:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.122 15:09:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.122 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.122 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.122 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.122 15:09:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.122 { 00:04:37.122 "name": "Malloc1", 00:04:37.122 "aliases": [ 00:04:37.122 "6ed6f8d1-615f-4a85-9452-fcdb58d7ab60" 00:04:37.122 ], 00:04:37.122 "product_name": "Malloc disk", 00:04:37.122 "block_size": 4096, 00:04:37.122 "num_blocks": 256, 00:04:37.122 "uuid": "6ed6f8d1-615f-4a85-9452-fcdb58d7ab60", 00:04:37.122 "assigned_rate_limits": { 00:04:37.122 "rw_ios_per_sec": 0, 00:04:37.122 "rw_mbytes_per_sec": 0, 00:04:37.122 "r_mbytes_per_sec": 0, 00:04:37.122 "w_mbytes_per_sec": 0 00:04:37.122 }, 00:04:37.122 "claimed": false, 00:04:37.122 "zoned": false, 00:04:37.122 "supported_io_types": { 00:04:37.122 "read": true, 00:04:37.122 "write": true, 00:04:37.122 "unmap": true, 00:04:37.122 "flush": true, 00:04:37.122 "reset": true, 00:04:37.122 "nvme_admin": false, 00:04:37.122 "nvme_io": false, 00:04:37.122 "nvme_io_md": false, 00:04:37.122 "write_zeroes": true, 00:04:37.122 "zcopy": true, 00:04:37.122 "get_zone_info": false, 00:04:37.122 "zone_management": false, 00:04:37.122 
"zone_append": false, 00:04:37.122 "compare": false, 00:04:37.122 "compare_and_write": false, 00:04:37.122 "abort": true, 00:04:37.122 "seek_hole": false, 00:04:37.122 "seek_data": false, 00:04:37.122 "copy": true, 00:04:37.122 "nvme_iov_md": false 00:04:37.122 }, 00:04:37.122 "memory_domains": [ 00:04:37.122 { 00:04:37.122 "dma_device_id": "system", 00:04:37.122 "dma_device_type": 1 00:04:37.122 }, 00:04:37.122 { 00:04:37.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.122 "dma_device_type": 2 00:04:37.122 } 00:04:37.122 ], 00:04:37.122 "driver_specific": {} 00:04:37.122 } 00:04:37.122 ]' 00:04:37.122 15:09:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:37.380 15:09:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.380 15:09:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.381 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.381 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.381 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.381 15:09:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.381 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.381 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.381 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.381 15:09:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.381 15:09:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:37.381 15:09:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.381 00:04:37.381 real 0m0.146s 00:04:37.381 user 0m0.089s 00:04:37.381 sys 0m0.016s 00:04:37.381 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:37.381 15:09:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.381 ************************************ 
00:04:37.381 END TEST rpc_plugins 00:04:37.381 ************************************ 00:04:37.381 15:09:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:37.381 15:09:04 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.381 15:09:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.381 15:09:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.381 ************************************ 00:04:37.381 START TEST rpc_trace_cmd_test 00:04:37.381 ************************************ 00:04:37.381 15:09:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:37.381 15:09:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:37.381 15:09:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:37.381 15:09:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.381 15:09:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.381 15:09:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.381 15:09:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:37.381 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3637933", 00:04:37.381 "tpoint_group_mask": "0x8", 00:04:37.381 "iscsi_conn": { 00:04:37.381 "mask": "0x2", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "scsi": { 00:04:37.381 "mask": "0x4", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "bdev": { 00:04:37.381 "mask": "0x8", 00:04:37.381 "tpoint_mask": "0xffffffffffffffff" 00:04:37.381 }, 00:04:37.381 "nvmf_rdma": { 00:04:37.381 "mask": "0x10", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "nvmf_tcp": { 00:04:37.381 "mask": "0x20", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "ftl": { 00:04:37.381 "mask": "0x40", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "blobfs": { 00:04:37.381 "mask": "0x80", 00:04:37.381 
"tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "dsa": { 00:04:37.381 "mask": "0x200", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "thread": { 00:04:37.381 "mask": "0x400", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "nvme_pcie": { 00:04:37.381 "mask": "0x800", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "iaa": { 00:04:37.381 "mask": "0x1000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "nvme_tcp": { 00:04:37.381 "mask": "0x2000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "bdev_nvme": { 00:04:37.381 "mask": "0x4000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "sock": { 00:04:37.381 "mask": "0x8000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "blob": { 00:04:37.381 "mask": "0x10000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "bdev_raid": { 00:04:37.381 "mask": "0x20000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 }, 00:04:37.381 "scheduler": { 00:04:37.381 "mask": "0x40000", 00:04:37.381 "tpoint_mask": "0x0" 00:04:37.381 } 00:04:37.381 }' 00:04:37.381 15:09:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:37.381 15:09:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:37.381 15:09:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:37.381 15:09:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:37.381 15:09:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:37.640 15:09:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:37.640 15:09:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:37.640 15:09:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:37.640 15:09:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:37.640 15:09:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:37.640 00:04:37.640 real 0m0.221s 00:04:37.640 user 0m0.190s 00:04:37.640 sys 0m0.025s 00:04:37.640 15:09:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:37.640 15:09:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.640 ************************************ 00:04:37.640 END TEST rpc_trace_cmd_test 00:04:37.640 ************************************ 00:04:37.640 15:09:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:37.640 15:09:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:37.640 15:09:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:37.640 15:09:05 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.640 15:09:05 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.640 15:09:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.640 ************************************ 00:04:37.640 START TEST rpc_daemon_integrity 00:04:37.640 ************************************ 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.640 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.899 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.899 { 00:04:37.899 "name": "Malloc2", 00:04:37.899 "aliases": [ 00:04:37.899 "60bed6ee-d63f-4ce9-b0b7-07d2a8036d98" 00:04:37.899 ], 00:04:37.899 "product_name": "Malloc disk", 00:04:37.899 "block_size": 512, 00:04:37.899 "num_blocks": 16384, 00:04:37.899 "uuid": "60bed6ee-d63f-4ce9-b0b7-07d2a8036d98", 00:04:37.899 "assigned_rate_limits": { 00:04:37.899 "rw_ios_per_sec": 0, 00:04:37.899 "rw_mbytes_per_sec": 0, 00:04:37.899 "r_mbytes_per_sec": 0, 00:04:37.899 "w_mbytes_per_sec": 0 00:04:37.899 }, 00:04:37.899 "claimed": false, 00:04:37.899 "zoned": false, 00:04:37.899 "supported_io_types": { 00:04:37.899 "read": true, 00:04:37.899 "write": true, 00:04:37.899 "unmap": true, 00:04:37.899 "flush": true, 00:04:37.899 "reset": true, 00:04:37.899 "nvme_admin": false, 00:04:37.899 "nvme_io": false, 00:04:37.899 "nvme_io_md": false, 00:04:37.899 "write_zeroes": true, 00:04:37.899 "zcopy": true, 00:04:37.899 "get_zone_info": false, 00:04:37.899 "zone_management": false, 00:04:37.899 "zone_append": false, 00:04:37.899 "compare": false, 00:04:37.899 "compare_and_write": false, 00:04:37.899 "abort": true, 00:04:37.899 "seek_hole": false, 00:04:37.899 "seek_data": false, 00:04:37.899 "copy": true, 00:04:37.899 "nvme_iov_md": false 00:04:37.899 }, 00:04:37.899 "memory_domains": [ 00:04:37.899 { 
00:04:37.899 "dma_device_id": "system", 00:04:37.899 "dma_device_type": 1 00:04:37.900 }, 00:04:37.900 { 00:04:37.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.900 "dma_device_type": 2 00:04:37.900 } 00:04:37.900 ], 00:04:37.900 "driver_specific": {} 00:04:37.900 } 00:04:37.900 ]' 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.900 [2024-11-06 15:09:05.324342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:37.900 [2024-11-06 15:09:05.324384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.900 [2024-11-06 15:09:05.324405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000024080 00:04:37.900 [2024-11-06 15:09:05.324416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.900 [2024-11-06 15:09:05.326259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.900 [2024-11-06 15:09:05.326284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.900 Passthru0 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.900 { 00:04:37.900 "name": "Malloc2", 00:04:37.900 "aliases": [ 00:04:37.900 "60bed6ee-d63f-4ce9-b0b7-07d2a8036d98" 00:04:37.900 ], 00:04:37.900 "product_name": "Malloc disk", 00:04:37.900 "block_size": 512, 00:04:37.900 "num_blocks": 16384, 00:04:37.900 "uuid": "60bed6ee-d63f-4ce9-b0b7-07d2a8036d98", 00:04:37.900 "assigned_rate_limits": { 00:04:37.900 "rw_ios_per_sec": 0, 00:04:37.900 "rw_mbytes_per_sec": 0, 00:04:37.900 "r_mbytes_per_sec": 0, 00:04:37.900 "w_mbytes_per_sec": 0 00:04:37.900 }, 00:04:37.900 "claimed": true, 00:04:37.900 "claim_type": "exclusive_write", 00:04:37.900 "zoned": false, 00:04:37.900 "supported_io_types": { 00:04:37.900 "read": true, 00:04:37.900 "write": true, 00:04:37.900 "unmap": true, 00:04:37.900 "flush": true, 00:04:37.900 "reset": true, 00:04:37.900 "nvme_admin": false, 00:04:37.900 "nvme_io": false, 00:04:37.900 "nvme_io_md": false, 00:04:37.900 "write_zeroes": true, 00:04:37.900 "zcopy": true, 00:04:37.900 "get_zone_info": false, 00:04:37.900 "zone_management": false, 00:04:37.900 "zone_append": false, 00:04:37.900 "compare": false, 00:04:37.900 "compare_and_write": false, 00:04:37.900 "abort": true, 00:04:37.900 "seek_hole": false, 00:04:37.900 "seek_data": false, 00:04:37.900 "copy": true, 00:04:37.900 "nvme_iov_md": false 00:04:37.900 }, 00:04:37.900 "memory_domains": [ 00:04:37.900 { 00:04:37.900 "dma_device_id": "system", 00:04:37.900 "dma_device_type": 1 00:04:37.900 }, 00:04:37.900 { 00:04:37.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.900 "dma_device_type": 2 00:04:37.900 } 00:04:37.900 ], 00:04:37.900 "driver_specific": {} 00:04:37.900 }, 00:04:37.900 { 00:04:37.900 "name": "Passthru0", 00:04:37.900 "aliases": [ 00:04:37.900 "9a02fe11-50a7-5faa-8a00-a0ab3da3c493" 00:04:37.900 ], 00:04:37.900 "product_name": "passthru", 00:04:37.900 "block_size": 512, 00:04:37.900 "num_blocks": 16384, 00:04:37.900 "uuid": 
"9a02fe11-50a7-5faa-8a00-a0ab3da3c493", 00:04:37.900 "assigned_rate_limits": { 00:04:37.900 "rw_ios_per_sec": 0, 00:04:37.900 "rw_mbytes_per_sec": 0, 00:04:37.900 "r_mbytes_per_sec": 0, 00:04:37.900 "w_mbytes_per_sec": 0 00:04:37.900 }, 00:04:37.900 "claimed": false, 00:04:37.900 "zoned": false, 00:04:37.900 "supported_io_types": { 00:04:37.900 "read": true, 00:04:37.900 "write": true, 00:04:37.900 "unmap": true, 00:04:37.900 "flush": true, 00:04:37.900 "reset": true, 00:04:37.900 "nvme_admin": false, 00:04:37.900 "nvme_io": false, 00:04:37.900 "nvme_io_md": false, 00:04:37.900 "write_zeroes": true, 00:04:37.900 "zcopy": true, 00:04:37.900 "get_zone_info": false, 00:04:37.900 "zone_management": false, 00:04:37.900 "zone_append": false, 00:04:37.900 "compare": false, 00:04:37.900 "compare_and_write": false, 00:04:37.900 "abort": true, 00:04:37.900 "seek_hole": false, 00:04:37.900 "seek_data": false, 00:04:37.900 "copy": true, 00:04:37.900 "nvme_iov_md": false 00:04:37.900 }, 00:04:37.900 "memory_domains": [ 00:04:37.900 { 00:04:37.900 "dma_device_id": "system", 00:04:37.900 "dma_device_type": 1 00:04:37.900 }, 00:04:37.900 { 00:04:37.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.900 "dma_device_type": 2 00:04:37.900 } 00:04:37.900 ], 00:04:37.900 "driver_specific": { 00:04:37.900 "passthru": { 00:04:37.900 "name": "Passthru0", 00:04:37.900 "base_bdev_name": "Malloc2" 00:04:37.900 } 00:04:37.900 } 00:04:37.900 } 00:04:37.900 ]' 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.900 00:04:37.900 real 0m0.307s 00:04:37.900 user 0m0.169s 00:04:37.900 sys 0m0.039s 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:37.900 15:09:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.900 ************************************ 00:04:37.900 END TEST rpc_daemon_integrity 00:04:37.900 ************************************ 00:04:37.900 15:09:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:37.900 15:09:05 rpc -- rpc/rpc.sh@84 -- # killprocess 3637933 00:04:37.900 15:09:05 rpc -- common/autotest_common.sh@952 -- # '[' -z 3637933 ']' 00:04:37.900 15:09:05 rpc -- common/autotest_common.sh@956 -- # kill -0 3637933 00:04:37.900 15:09:05 rpc -- common/autotest_common.sh@957 -- # uname 00:04:37.900 15:09:05 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:37.900 15:09:05 rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3637933 00:04:38.159 15:09:05 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:38.159 15:09:05 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:38.159 15:09:05 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3637933' 00:04:38.159 killing process with pid 3637933 00:04:38.159 15:09:05 rpc -- common/autotest_common.sh@971 -- # kill 3637933 00:04:38.159 15:09:05 rpc -- common/autotest_common.sh@976 -- # wait 3637933 00:04:40.692 00:04:40.692 real 0m4.877s 00:04:40.692 user 0m5.492s 00:04:40.692 sys 0m0.828s 00:04:40.692 15:09:07 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.692 15:09:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.692 ************************************ 00:04:40.692 END TEST rpc 00:04:40.692 ************************************ 00:04:40.692 15:09:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:40.692 15:09:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.692 15:09:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.692 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:04:40.692 ************************************ 00:04:40.692 START TEST skip_rpc 00:04:40.692 ************************************ 00:04:40.692 15:09:07 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:40.692 * Looking for test storage... 
00:04:40.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:40.692 15:09:07 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:40.692 15:09:07 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:40.692 15:09:07 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:40.692 15:09:08 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.692 15:09:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.692 15:09:08 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.692 15:09:08 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.692 --rc genhtml_branch_coverage=1 00:04:40.692 --rc genhtml_function_coverage=1 00:04:40.692 --rc genhtml_legend=1 00:04:40.692 --rc geninfo_all_blocks=1 00:04:40.692 --rc geninfo_unexecuted_blocks=1 00:04:40.692 00:04:40.692 ' 00:04:40.692 15:09:08 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.692 --rc genhtml_branch_coverage=1 00:04:40.692 --rc genhtml_function_coverage=1 00:04:40.692 --rc genhtml_legend=1 00:04:40.692 --rc geninfo_all_blocks=1 00:04:40.692 --rc geninfo_unexecuted_blocks=1 00:04:40.692 00:04:40.692 ' 00:04:40.692 15:09:08 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.692 --rc genhtml_branch_coverage=1 00:04:40.692 --rc genhtml_function_coverage=1 00:04:40.692 --rc genhtml_legend=1 00:04:40.692 --rc geninfo_all_blocks=1 00:04:40.692 --rc geninfo_unexecuted_blocks=1 00:04:40.692 00:04:40.692 ' 00:04:40.692 15:09:08 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.692 --rc genhtml_branch_coverage=1 00:04:40.692 --rc genhtml_function_coverage=1 00:04:40.692 --rc genhtml_legend=1 00:04:40.692 --rc geninfo_all_blocks=1 00:04:40.692 --rc geninfo_unexecuted_blocks=1 00:04:40.692 00:04:40.692 ' 00:04:40.692 15:09:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.692 15:09:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.692 15:09:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:40.692 15:09:08 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.692 15:09:08 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.692 15:09:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.692 ************************************ 00:04:40.692 START TEST skip_rpc 00:04:40.692 ************************************ 00:04:40.692 15:09:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:40.692 15:09:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3639033 00:04:40.692 15:09:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:40.692 15:09:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.692 15:09:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:40.692 [2024-11-06 15:09:08.201543] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:04:40.692 [2024-11-06 15:09:08.201631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639033 ] 00:04:40.692 [2024-11-06 15:09:08.325054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.951 [2024-11-06 15:09:08.428446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:46.265 15:09:13 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3639033 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3639033 ']' 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3639033 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3639033 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3639033' 00:04:46.265 killing process with pid 3639033 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3639033 00:04:46.265 15:09:13 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3639033 00:04:48.213 00:04:48.213 real 0m7.337s 00:04:48.213 user 0m6.959s 00:04:48.213 sys 0m0.399s 00:04:48.213 15:09:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.213 15:09:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.213 ************************************ 00:04:48.213 END TEST skip_rpc 00:04:48.213 ************************************ 00:04:48.213 15:09:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:48.213 15:09:15 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.213 15:09:15 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.213 15:09:15 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.213 ************************************ 00:04:48.213 START TEST skip_rpc_with_json 00:04:48.213 ************************************ 00:04:48.213 15:09:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:48.213 15:09:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:48.213 15:09:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3640613 00:04:48.213 15:09:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.213 15:09:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.213 15:09:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3640613 00:04:48.213 15:09:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3640613 ']' 00:04:48.213 15:09:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.213 15:09:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.213 15:09:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.213 15:09:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.213 15:09:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.213 [2024-11-06 15:09:15.613489] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:04:48.213 [2024-11-06 15:09:15.613584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640613 ]
00:04:48.213 [2024-11-06 15:09:15.736824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:48.471 [2024-11-06 15:09:15.854145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:49.039 [2024-11-06 15:09:16.653946] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:49.039 request:
00:04:49.039 {
00:04:49.039   "trtype": "tcp",
00:04:49.039   "method": "nvmf_get_transports",
00:04:49.039   "req_id": 1
00:04:49.039 }
00:04:49.039 Got JSON-RPC error response
00:04:49.039 response:
00:04:49.039 {
00:04:49.039   "code": -19,
00:04:49.039   "message": "No such device"
00:04:49.039 }
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:49.039 [2024-11-06 15:09:16.666074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:49.039 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:49.298 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:49.298 15:09:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:49.298 {
00:04:49.298   "subsystems": [
00:04:49.298     {
00:04:49.298       "subsystem": "fsdev",
00:04:49.298       "config": [
00:04:49.298         {
00:04:49.298           "method": "fsdev_set_opts",
00:04:49.298           "params": {
00:04:49.298             "fsdev_io_pool_size": 65535,
00:04:49.298             "fsdev_io_cache_size": 256
00:04:49.298           }
00:04:49.298         }
00:04:49.298       ]
00:04:49.298     },
00:04:49.298     {
00:04:49.298       "subsystem": "keyring",
00:04:49.298       "config": []
00:04:49.298     },
00:04:49.298     {
00:04:49.298       "subsystem": "iobuf",
00:04:49.298       "config": [
00:04:49.298         {
00:04:49.298           "method": "iobuf_set_options",
00:04:49.298           "params": {
00:04:49.298             "small_pool_count": 8192,
00:04:49.298             "large_pool_count": 1024,
00:04:49.298             "small_bufsize": 8192,
00:04:49.298             "large_bufsize": 135168,
00:04:49.298             "enable_numa": false
00:04:49.298           }
00:04:49.298         }
00:04:49.298       ]
00:04:49.298     },
00:04:49.298     {
00:04:49.298       "subsystem": "sock",
00:04:49.298       "config": [
00:04:49.298         {
00:04:49.298           "method": "sock_set_default_impl",
00:04:49.298           "params": {
00:04:49.298             "impl_name": "posix"
00:04:49.298           }
00:04:49.298         },
00:04:49.298         {
00:04:49.298           "method": "sock_impl_set_options",
00:04:49.298           "params": {
00:04:49.298             "impl_name": "ssl",
00:04:49.298             "recv_buf_size": 4096,
00:04:49.298             "send_buf_size": 4096,
00:04:49.298             "enable_recv_pipe": true,
00:04:49.298             "enable_quickack": false,
00:04:49.298             "enable_placement_id": 0,
00:04:49.298             "enable_zerocopy_send_server": true,
00:04:49.298             "enable_zerocopy_send_client": false,
00:04:49.298             "zerocopy_threshold": 0,
00:04:49.298             "tls_version": 0,
00:04:49.298             "enable_ktls": false
00:04:49.298           }
00:04:49.298         },
00:04:49.298         {
00:04:49.298           "method": "sock_impl_set_options",
00:04:49.298           "params": {
00:04:49.298             "impl_name": "posix",
00:04:49.298             "recv_buf_size": 2097152,
00:04:49.298             "send_buf_size": 2097152,
00:04:49.298             "enable_recv_pipe": true,
00:04:49.298             "enable_quickack": false,
00:04:49.298             "enable_placement_id": 0,
00:04:49.298             "enable_zerocopy_send_server": true,
00:04:49.298             "enable_zerocopy_send_client": false,
00:04:49.298             "zerocopy_threshold": 0,
00:04:49.298             "tls_version": 0,
00:04:49.298             "enable_ktls": false
00:04:49.298           }
00:04:49.298         }
00:04:49.298       ]
00:04:49.298     },
00:04:49.298     {
00:04:49.298       "subsystem": "vmd",
00:04:49.298       "config": []
00:04:49.298     },
00:04:49.298     {
00:04:49.298       "subsystem": "accel",
00:04:49.298       "config": [
00:04:49.298         {
00:04:49.298           "method": "accel_set_options",
00:04:49.298           "params": {
00:04:49.298             "small_cache_size": 128,
00:04:49.298             "large_cache_size": 16,
00:04:49.298             "task_count": 2048,
00:04:49.298             "sequence_count": 2048,
00:04:49.298             "buf_count": 2048
00:04:49.298           }
00:04:49.298         }
00:04:49.298       ]
00:04:49.298     },
00:04:49.298     {
00:04:49.298       "subsystem": "bdev",
00:04:49.298       "config": [
00:04:49.298         {
00:04:49.298           "method": "bdev_set_options",
00:04:49.298           "params": {
00:04:49.298             "bdev_io_pool_size": 65535,
00:04:49.298             "bdev_io_cache_size": 256,
00:04:49.298             "bdev_auto_examine": true,
00:04:49.298             "iobuf_small_cache_size": 128,
00:04:49.298             "iobuf_large_cache_size": 16
00:04:49.298           }
00:04:49.298         },
00:04:49.298         {
00:04:49.298           "method": "bdev_raid_set_options",
00:04:49.298           "params": {
00:04:49.298             "process_window_size_kb": 1024,
00:04:49.298             "process_max_bandwidth_mb_sec": 0
00:04:49.298           }
00:04:49.298         },
00:04:49.298         {
00:04:49.298           "method": "bdev_iscsi_set_options",
00:04:49.298           "params": {
00:04:49.298             "timeout_sec": 30
00:04:49.298           }
00:04:49.298         },
00:04:49.298         {
00:04:49.298           "method": "bdev_nvme_set_options",
00:04:49.298           "params": {
00:04:49.298             "action_on_timeout": "none",
00:04:49.298             "timeout_us": 0,
00:04:49.298             "timeout_admin_us": 0,
00:04:49.298             "keep_alive_timeout_ms": 10000,
00:04:49.298             "arbitration_burst": 0,
00:04:49.298             "low_priority_weight": 0,
00:04:49.298             "medium_priority_weight": 0,
00:04:49.298             "high_priority_weight": 0,
00:04:49.298             "nvme_adminq_poll_period_us": 10000,
00:04:49.298             "nvme_ioq_poll_period_us": 0,
00:04:49.298             "io_queue_requests": 0,
00:04:49.298             "delay_cmd_submit": true,
00:04:49.298             "transport_retry_count": 4,
00:04:49.298             "bdev_retry_count": 3,
00:04:49.298             "transport_ack_timeout": 0,
00:04:49.298             "ctrlr_loss_timeout_sec": 0,
00:04:49.298             "reconnect_delay_sec": 0,
00:04:49.298             "fast_io_fail_timeout_sec": 0,
00:04:49.298             "disable_auto_failback": false,
00:04:49.298             "generate_uuids": false,
00:04:49.298             "transport_tos": 0,
00:04:49.298             "nvme_error_stat": false,
00:04:49.298             "rdma_srq_size": 0,
00:04:49.298             "io_path_stat": false,
00:04:49.298             "allow_accel_sequence": false,
00:04:49.298             "rdma_max_cq_size": 0,
00:04:49.298             "rdma_cm_event_timeout_ms": 0,
00:04:49.298             "dhchap_digests": [
00:04:49.298               "sha256",
00:04:49.298               "sha384",
00:04:49.298               "sha512"
00:04:49.298             ],
00:04:49.298             "dhchap_dhgroups": [
00:04:49.298               "null",
00:04:49.298               "ffdhe2048",
00:04:49.298               "ffdhe3072",
00:04:49.298               "ffdhe4096",
00:04:49.298               "ffdhe6144",
00:04:49.298               "ffdhe8192"
00:04:49.298             ]
00:04:49.298           }
00:04:49.298         },
00:04:49.298         {
00:04:49.298           "method": "bdev_nvme_set_hotplug",
00:04:49.298           "params": {
00:04:49.298             "period_us": 100000,
00:04:49.298             "enable": false
00:04:49.299           }
00:04:49.299         },
00:04:49.299         {
00:04:49.299           "method": "bdev_wait_for_examine"
00:04:49.299         }
00:04:49.299       ]
00:04:49.299     },
00:04:49.299     {
00:04:49.299       "subsystem": "scsi",
00:04:49.299       "config": null
00:04:49.299     },
00:04:49.299     {
00:04:49.299       "subsystem": "scheduler",
00:04:49.299       "config": [
00:04:49.299         {
00:04:49.299           "method": "framework_set_scheduler",
00:04:49.299           "params": {
00:04:49.299             "name": "static"
00:04:49.299           }
00:04:49.299         }
00:04:49.299       ]
00:04:49.299     },
00:04:49.299     {
00:04:49.299       "subsystem": "vhost_scsi",
00:04:49.299       "config": []
00:04:49.299     },
00:04:49.299     {
00:04:49.299       "subsystem": "vhost_blk",
00:04:49.299       "config": []
00:04:49.299     },
00:04:49.299     {
00:04:49.299       "subsystem": "ublk",
00:04:49.299       "config": []
00:04:49.299     },
00:04:49.299     {
00:04:49.299       "subsystem": "nbd",
00:04:49.299       "config": []
00:04:49.299     },
00:04:49.299     {
00:04:49.299       "subsystem": "nvmf",
00:04:49.299       "config": [
00:04:49.299         {
00:04:49.299           "method": "nvmf_set_config",
00:04:49.299           "params": {
00:04:49.299             "discovery_filter": "match_any",
00:04:49.299             "admin_cmd_passthru": {
00:04:49.299               "identify_ctrlr": false
00:04:49.299             },
00:04:49.299             "dhchap_digests": [
00:04:49.299               "sha256",
00:04:49.299               "sha384",
00:04:49.299               "sha512"
00:04:49.299             ],
00:04:49.299             "dhchap_dhgroups": [
00:04:49.299               "null",
00:04:49.299               "ffdhe2048",
00:04:49.299               "ffdhe3072",
00:04:49.299               "ffdhe4096",
00:04:49.299               "ffdhe6144",
00:04:49.299               "ffdhe8192"
00:04:49.299             ]
00:04:49.299           }
00:04:49.299         },
00:04:49.299         {
00:04:49.299           "method": "nvmf_set_max_subsystems",
00:04:49.299           "params": {
00:04:49.299             "max_subsystems": 1024
00:04:49.299           }
00:04:49.299         },
00:04:49.299         {
00:04:49.299           "method": "nvmf_set_crdt",
00:04:49.299           "params": {
00:04:49.299             "crdt1": 0,
00:04:49.299             "crdt2": 0,
00:04:49.299             "crdt3": 0
00:04:49.299           }
00:04:49.299         },
00:04:49.299         {
00:04:49.299           "method": "nvmf_create_transport",
00:04:49.299           "params": {
00:04:49.299             "trtype": "TCP",
00:04:49.299             "max_queue_depth": 128,
00:04:49.299             "max_io_qpairs_per_ctrlr": 127,
00:04:49.299             "in_capsule_data_size": 4096,
00:04:49.299             "max_io_size": 131072,
00:04:49.299             "io_unit_size": 131072,
00:04:49.299             "max_aq_depth": 128,
00:04:49.299             "num_shared_buffers": 511,
00:04:49.299             "buf_cache_size": 4294967295,
00:04:49.299             "dif_insert_or_strip": false,
00:04:49.299             "zcopy": false,
00:04:49.299             "c2h_success": true,
00:04:49.299             "sock_priority": 0,
00:04:49.299             "abort_timeout_sec": 1,
00:04:49.299             "ack_timeout": 0,
00:04:49.299             "data_wr_pool_size": 0
00:04:49.299           }
00:04:49.299         }
00:04:49.299       ]
00:04:49.299     },
00:04:49.299     {
00:04:49.299       "subsystem": "iscsi",
00:04:49.299       "config": [
00:04:49.299         {
00:04:49.299           "method": "iscsi_set_options",
00:04:49.299           "params": {
00:04:49.299             "node_base": "iqn.2016-06.io.spdk",
00:04:49.299             "max_sessions": 128,
00:04:49.299             "max_connections_per_session": 2,
00:04:49.299             "max_queue_depth": 64,
00:04:49.299             "default_time2wait": 2,
00:04:49.299             "default_time2retain": 20,
00:04:49.299             "first_burst_length": 8192,
00:04:49.299             "immediate_data": true,
00:04:49.299             "allow_duplicated_isid": false,
00:04:49.299             "error_recovery_level": 0,
00:04:49.299             "nop_timeout": 60,
00:04:49.299             "nop_in_interval": 30,
00:04:49.299             "disable_chap": false,
00:04:49.299             "require_chap": false,
00:04:49.299             "mutual_chap": false,
00:04:49.299             "chap_group": 0,
00:04:49.299             "max_large_datain_per_connection": 64,
00:04:49.299             "max_r2t_per_connection": 4,
00:04:49.299             "pdu_pool_size": 36864,
00:04:49.299             "immediate_data_pool_size": 16384,
00:04:49.299             "data_out_pool_size": 2048
00:04:49.299           }
00:04:49.299         }
00:04:49.299       ]
00:04:49.299     }
00:04:49.299   ]
00:04:49.299 }
00:04:49.299 15:09:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:49.299 15:09:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3640613
00:04:49.299 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3640613 ']'
00:04:49.299 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3640613
00:04:49.299 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname
00:04:49.299 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:04:49.299 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3640613
00:04:49.299 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:04:49.299 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:04:49.299 15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3640613'
killing process with pid 3640613
15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3640613
15:09:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3640613
00:04:51.830 15:09:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3641309
00:04:51.831 15:09:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:51.831 15:09:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:57.097 15:09:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3641309
00:04:57.097 15:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3641309 ']'
00:04:57.097 15:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3641309
00:04:57.097 15:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname
00:04:57.097 15:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:04:57.097 15:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3641309
00:04:57.097 15:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:04:57.097 15:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:04:57.097 15:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3641309'
killing process with pid 3641309
15:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3641309
15:09:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3641309
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:59.001 
00:04:59.001 real 0m10.991s
00:04:59.001 user 0m10.581s
00:04:59.001 sys 0m0.872s
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:59.001 ************************************
00:04:59.001 END TEST skip_rpc_with_json
00:04:59.001 ************************************
00:04:59.001 15:09:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:59.001 15:09:26 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:59.001 15:09:26 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:59.001 15:09:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:59.001 ************************************
00:04:59.001 START TEST skip_rpc_with_delay
00:04:59.001 ************************************
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:59.001 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:59.260 [2024-11-06 15:09:26.675066] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:59.260 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:04:59.260 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:59.260 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:59.260 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:59.260 
00:04:59.260 real 0m0.143s
00:04:59.260 user 0m0.085s
00:04:59.260 sys 0m0.057s
00:04:59.260 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:59.260 15:09:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:59.260 ************************************
00:04:59.260 END TEST skip_rpc_with_delay
00:04:59.260 ************************************
00:04:59.260 15:09:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:04:59.260 15:09:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:04:59.260 15:09:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:04:59.260 15:09:26 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:59.260 15:09:26 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:59.260 15:09:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:59.260 ************************************
00:04:59.260 START TEST exit_on_failed_rpc_init
00:04:59.260 ************************************
00:04:59.260 15:09:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init
00:04:59.260 15:09:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3642706
00:04:59.260 15:09:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3642706
00:04:59.260 15:09:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:59.260 15:09:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3642706 ']'
00:04:59.260 15:09:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:59.260 15:09:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100
00:04:59.260 15:09:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:59.260 15:09:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable
00:04:59.260 15:09:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:59.519 [2024-11-06 15:09:26.897817] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
[2024-11-06 15:09:26.897902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642706 ]
00:04:59.519 [2024-11-06 15:09:27.023807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:59.519 [2024-11-06 15:09:27.127315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:00.455 15:09:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:00.455 [2024-11-06 15:09:28.012811] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:05:00.455 [2024-11-06 15:09:28.012897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642778 ]
00:05:00.713 [2024-11-06 15:09:28.135508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:00.713 [2024-11-06 15:09:28.246382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:00.713 [2024-11-06 15:09:28.246451] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:00.713 [2024-11-06 15:09:28.246469] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:00.713 [2024-11-06 15:09:28.246479] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3642706
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3642706 ']'
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3642706
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3642706
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:00.972 15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3642706'
killing process with pid 3642706
15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3642706
15:09:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3642706
00:05:03.504 
00:05:03.504 real 0m4.041s
00:05:03.504 user 0m4.384s
00:05:03.504 sys 0m0.613s
00:05:03.504 15:09:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:03.504 15:09:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:03.504 ************************************
00:05:03.504 END TEST exit_on_failed_rpc_init
00:05:03.504 ************************************
00:05:03.504 15:09:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:03.504 
00:05:03.504 real 0m22.983s
00:05:03.504 user 0m22.223s
00:05:03.504 sys 0m2.229s
00:05:03.504 15:09:30 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:03.504 15:09:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:03.504 ************************************
00:05:03.504 END TEST skip_rpc
00:05:03.504 ************************************
00:05:03.504 15:09:30 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:03.504 15:09:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:03.504 15:09:30 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:03.504 15:09:30 -- common/autotest_common.sh@10 -- # set +x
00:05:03.504 ************************************
00:05:03.504 START TEST rpc_client
00:05:03.504 ************************************
00:05:03.504 15:09:30 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:03.504 * Looking for test storage...
00:05:03.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:05:03.504 15:09:31 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:03.504 15:09:31 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version
00:05:03.504 15:09:31 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:03.504 15:09:31 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:03.504 15:09:31 rpc_client -- scripts/common.sh@368 -- # return 0
00:05:03.504 15:09:31 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:03.504 15:09:31 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:03.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.504 --rc genhtml_branch_coverage=1
00:05:03.504 --rc genhtml_function_coverage=1
00:05:03.504 --rc genhtml_legend=1
00:05:03.504 --rc geninfo_all_blocks=1
00:05:03.504 --rc geninfo_unexecuted_blocks=1
00:05:03.504 
00:05:03.504 '
00:05:03.504 15:09:31 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:03.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.504 --rc genhtml_branch_coverage=1
00:05:03.504 --rc genhtml_function_coverage=1
00:05:03.504 --rc genhtml_legend=1
00:05:03.504 --rc geninfo_all_blocks=1
00:05:03.504 --rc geninfo_unexecuted_blocks=1
00:05:03.504 
00:05:03.504 '
00:05:03.504 15:09:31 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:03.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.504 --rc genhtml_branch_coverage=1
00:05:03.504 --rc genhtml_function_coverage=1
00:05:03.504 --rc genhtml_legend=1
00:05:03.504 --rc geninfo_all_blocks=1
00:05:03.504 --rc geninfo_unexecuted_blocks=1
00:05:03.504 
00:05:03.504 '
00:05:03.504 15:09:31 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:03.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.504 --rc genhtml_branch_coverage=1
00:05:03.504 --rc genhtml_function_coverage=1
00:05:03.504 --rc genhtml_legend=1
00:05:03.504 --rc geninfo_all_blocks=1
00:05:03.504 --rc geninfo_unexecuted_blocks=1
00:05:03.504 
00:05:03.504 '
00:05:03.504 15:09:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:03.812 OK
00:05:03.812 15:09:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:03.812 
00:05:03.812 real 0m0.233s
00:05:03.812 user 0m0.134s
00:05:03.812 sys 0m0.113s
00:05:03.812 15:09:31 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:03.812 15:09:31 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:03.812 ************************************
00:05:03.812 END TEST rpc_client
00:05:03.812 ************************************
00:05:03.812 15:09:31 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:03.812 15:09:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:03.812 15:09:31 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:03.812 15:09:31 -- common/autotest_common.sh@10 -- # set +x
00:05:03.812 ************************************
00:05:03.812 START TEST json_config
00:05:03.812 ************************************
00:05:03.812 15:09:31 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:03.812 15:09:31 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:03.812 15:09:31 json_config -- common/autotest_common.sh@1691 -- # lcov --version
00:05:03.812 15:09:31 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:03.812 15:09:31 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:03.812 15:09:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:03.812 15:09:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:03.812 15:09:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:03.812 15:09:31 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:05:03.812 15:09:31 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:05:03.812 15:09:31 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:05:03.812 15:09:31 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:05:03.812 15:09:31 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:05:03.812 15:09:31 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:05:03.812 15:09:31 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:05:03.812 15:09:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:03.812 15:09:31 json_config -- scripts/common.sh@344 -- # case "$op" in
00:05:03.812 15:09:31 json_config -- scripts/common.sh@345 -- # : 1
00:05:03.812 15:09:31 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:03.812 15:09:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:03.812 15:09:31 json_config -- scripts/common.sh@365 -- # decimal 1
00:05:03.812 15:09:31 json_config -- scripts/common.sh@353 -- # local d=1
00:05:03.812 15:09:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:03.812 15:09:31 json_config -- scripts/common.sh@355 -- # echo 1
00:05:03.812 15:09:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:05:03.812 15:09:31 json_config -- scripts/common.sh@366 -- # decimal 2
00:05:03.812 15:09:31 json_config -- scripts/common.sh@353 -- # local d=2
00:05:03.812 15:09:31 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:03.812 15:09:31 json_config -- scripts/common.sh@355 -- # echo 2
00:05:03.812 15:09:31 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:05:03.812 15:09:31 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:03.812 15:09:31 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:03.812 15:09:31 json_config -- scripts/common.sh@368 -- # return 0
00:05:03.812 15:09:31 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:03.812 15:09:31 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:03.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.812 --rc genhtml_branch_coverage=1
00:05:03.812 --rc genhtml_function_coverage=1
00:05:03.812 --rc genhtml_legend=1
00:05:03.812 --rc geninfo_all_blocks=1
00:05:03.812 --rc geninfo_unexecuted_blocks=1
00:05:03.812 
00:05:03.812 '
00:05:03.812 15:09:31 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:03.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.812 --rc genhtml_branch_coverage=1
00:05:03.812 --rc genhtml_function_coverage=1
00:05:03.812 --rc genhtml_legend=1
00:05:03.812 --rc geninfo_all_blocks=1
00:05:03.812 --rc geninfo_unexecuted_blocks=1
00:05:03.812 
00:05:03.812 '
00:05:03.812 15:09:31 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.812 --rc genhtml_branch_coverage=1 00:05:03.812 --rc genhtml_function_coverage=1 00:05:03.812 --rc genhtml_legend=1 00:05:03.812 --rc geninfo_all_blocks=1 00:05:03.812 --rc geninfo_unexecuted_blocks=1 00:05:03.812 00:05:03.812 ' 00:05:03.812 15:09:31 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.812 --rc genhtml_branch_coverage=1 00:05:03.812 --rc genhtml_function_coverage=1 00:05:03.812 --rc genhtml_legend=1 00:05:03.812 --rc geninfo_all_blocks=1 00:05:03.812 --rc geninfo_unexecuted_blocks=1 00:05:03.812 00:05:03.812 ' 00:05:03.812 15:09:31 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.812 15:09:31 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:03.812 15:09:31 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.812 15:09:31 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.812 15:09:31 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.812 15:09:31 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.812 15:09:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.812 15:09:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.812 15:09:31 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.812 15:09:31 json_config -- paths/export.sh@5 -- # export PATH 00:05:03.813 15:09:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.813 15:09:31 json_config -- nvmf/common.sh@51 -- # : 0 00:05:03.813 15:09:31 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.813 15:09:31 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:03.813 15:09:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.813 15:09:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.813 15:09:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.813 15:09:31 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.813 15:09:31 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.813 15:09:31 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.813 15:09:31 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:03.813 INFO: JSON configuration test init 00:05:03.813 15:09:31 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:03.813 15:09:31 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:03.813 15:09:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.813 15:09:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.072 15:09:31 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:04.072 15:09:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.072 15:09:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.072 15:09:31 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:04.072 15:09:31 json_config -- json_config/common.sh@9 -- # local app=target 00:05:04.072 15:09:31 json_config -- json_config/common.sh@10 -- # shift 00:05:04.072 15:09:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.072 15:09:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.072 15:09:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.072 15:09:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.072 15:09:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.072 15:09:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3643564 00:05:04.072 15:09:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.072 Waiting for target to run... 
00:05:04.072 15:09:31 json_config -- json_config/common.sh@25 -- # waitforlisten 3643564 /var/tmp/spdk_tgt.sock 00:05:04.072 15:09:31 json_config -- common/autotest_common.sh@833 -- # '[' -z 3643564 ']' 00:05:04.072 15:09:31 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.072 15:09:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:04.072 15:09:31 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:04.072 15:09:31 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.072 15:09:31 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:04.072 15:09:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.072 [2024-11-06 15:09:31.541399] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:05:04.072 [2024-11-06 15:09:31.541493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643564 ] 00:05:04.332 [2024-11-06 15:09:31.856866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.332 [2024-11-06 15:09:31.953997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.900 15:09:32 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.900 15:09:32 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:04.900 15:09:32 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.900 00:05:04.900 15:09:32 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:04.900 15:09:32 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:04.900 15:09:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.900 15:09:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.900 15:09:32 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:04.900 15:09:32 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:04.900 15:09:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.900 15:09:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.900 15:09:32 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:04.900 15:09:32 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:04.900 15:09:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:09.088 15:09:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.088 15:09:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:09.088 15:09:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@54 -- # sort 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:09.088 15:09:36 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:09.088 15:09:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:09.088 15:09:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:09.088 15:09:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.088 15:09:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.088 15:09:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.088 MallocForNvmf0 00:05:09.088 15:09:36 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:05:09.088 15:09:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:09.346 MallocForNvmf1 00:05:09.346 15:09:36 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.346 15:09:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:09.346 [2024-11-06 15:09:36.927682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.347 15:09:36 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.347 15:09:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:09.605 15:09:37 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.605 15:09:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:09.883 15:09:37 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:09.883 15:09:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.141 15:09:37 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.141 15:09:37 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.141 [2024-11-06 15:09:37.710253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.141 15:09:37 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:10.141 15:09:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.141 15:09:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.141 15:09:37 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:10.141 15:09:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.141 15:09:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.399 15:09:37 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:10.399 15:09:37 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.399 15:09:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:10.399 MallocBdevForConfigChangeCheck 00:05:10.399 15:09:38 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:10.399 15:09:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.399 15:09:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.776 15:09:38 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:10.776 15:09:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.776 15:09:38 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:05:10.776 INFO: shutting down applications... 00:05:10.776 15:09:38 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:10.776 15:09:38 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:10.776 15:09:38 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:10.776 15:09:38 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:13.308 Calling clear_iscsi_subsystem 00:05:13.308 Calling clear_nvmf_subsystem 00:05:13.308 Calling clear_nbd_subsystem 00:05:13.308 Calling clear_ublk_subsystem 00:05:13.308 Calling clear_vhost_blk_subsystem 00:05:13.308 Calling clear_vhost_scsi_subsystem 00:05:13.308 Calling clear_bdev_subsystem 00:05:13.308 15:09:40 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:13.308 15:09:40 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:13.308 15:09:40 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:13.308 15:09:40 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.308 15:09:40 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:13.308 15:09:40 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:13.308 15:09:40 json_config -- json_config/json_config.sh@352 -- # break 00:05:13.308 15:09:40 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:13.308 15:09:40 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:05:13.308 15:09:40 json_config -- json_config/common.sh@31 -- # local app=target 00:05:13.308 15:09:40 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.308 15:09:40 json_config -- json_config/common.sh@35 -- # [[ -n 3643564 ]] 00:05:13.308 15:09:40 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3643564 00:05:13.308 15:09:40 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.308 15:09:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.308 15:09:40 json_config -- json_config/common.sh@41 -- # kill -0 3643564 00:05:13.308 15:09:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.876 15:09:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.876 15:09:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.876 15:09:41 json_config -- json_config/common.sh@41 -- # kill -0 3643564 00:05:13.876 15:09:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.444 15:09:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.444 15:09:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.444 15:09:41 json_config -- json_config/common.sh@41 -- # kill -0 3643564 00:05:14.444 15:09:41 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:14.444 15:09:41 json_config -- json_config/common.sh@43 -- # break 00:05:14.444 15:09:41 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:14.444 15:09:41 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:14.444 SPDK target shutdown done 00:05:14.444 15:09:41 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:14.444 INFO: relaunching applications... 
00:05:14.444 15:09:41 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.444 15:09:41 json_config -- json_config/common.sh@9 -- # local app=target 00:05:14.444 15:09:41 json_config -- json_config/common.sh@10 -- # shift 00:05:14.444 15:09:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.444 15:09:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.444 15:09:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.444 15:09:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.444 15:09:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.444 15:09:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3645440 00:05:14.444 15:09:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.444 Waiting for target to run... 00:05:14.444 15:09:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.444 15:09:41 json_config -- json_config/common.sh@25 -- # waitforlisten 3645440 /var/tmp/spdk_tgt.sock 00:05:14.444 15:09:41 json_config -- common/autotest_common.sh@833 -- # '[' -z 3645440 ']' 00:05:14.444 15:09:41 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.444 15:09:41 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.444 15:09:41 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:14.444 15:09:41 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.444 15:09:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.444 [2024-11-06 15:09:41.935638] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:14.444 [2024-11-06 15:09:41.935741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645440 ] 00:05:15.012 [2024-11-06 15:09:42.437483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.012 [2024-11-06 15:09:42.548267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.202 [2024-11-06 15:09:46.239264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.202 [2024-11-06 15:09:46.271645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:19.202 15:09:46 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:19.202 15:09:46 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:19.202 15:09:46 json_config -- json_config/common.sh@26 -- # echo '' 00:05:19.202 00:05:19.202 15:09:46 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:19.202 15:09:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:19.202 INFO: Checking if target configuration is the same... 
00:05:19.202 15:09:46 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.202 15:09:46 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:19.202 15:09:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.202 + '[' 2 -ne 2 ']' 00:05:19.202 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:19.202 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:19.202 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.202 +++ basename /dev/fd/62 00:05:19.202 ++ mktemp /tmp/62.XXX 00:05:19.202 + tmp_file_1=/tmp/62.cBo 00:05:19.202 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.202 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:19.202 + tmp_file_2=/tmp/spdk_tgt_config.json.yqs 00:05:19.202 + ret=0 00:05:19.202 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.202 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.202 + diff -u /tmp/62.cBo /tmp/spdk_tgt_config.json.yqs 00:05:19.202 + echo 'INFO: JSON config files are the same' 00:05:19.202 INFO: JSON config files are the same 00:05:19.202 + rm /tmp/62.cBo /tmp/spdk_tgt_config.json.yqs 00:05:19.202 + exit 0 00:05:19.202 15:09:46 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:19.202 15:09:46 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:19.202 INFO: changing configuration and checking if this can be detected... 
00:05:19.203 15:09:46 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:19.203 15:09:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:19.461 15:09:46 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.461 15:09:46 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:19.461 15:09:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.461 + '[' 2 -ne 2 ']' 00:05:19.461 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:19.461 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:19.461 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:19.461 +++ basename /dev/fd/62 00:05:19.461 ++ mktemp /tmp/62.XXX 00:05:19.461 + tmp_file_1=/tmp/62.STz 00:05:19.461 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.461 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:19.461 + tmp_file_2=/tmp/spdk_tgt_config.json.Djp 00:05:19.461 + ret=0 00:05:19.461 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.720 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:19.720 + diff -u /tmp/62.STz /tmp/spdk_tgt_config.json.Djp 00:05:19.720 + ret=1 00:05:19.720 + echo '=== Start of file: /tmp/62.STz ===' 00:05:19.720 + cat /tmp/62.STz 00:05:19.720 + echo '=== End of file: /tmp/62.STz ===' 00:05:19.720 + echo '' 00:05:19.720 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Djp ===' 00:05:19.720 + cat /tmp/spdk_tgt_config.json.Djp 00:05:19.720 + echo '=== End of file: /tmp/spdk_tgt_config.json.Djp ===' 00:05:19.720 + echo '' 00:05:19.720 + rm /tmp/62.STz /tmp/spdk_tgt_config.json.Djp 00:05:19.720 + exit 1 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:19.720 INFO: configuration change detected. 
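The json_diff.sh trace above shows the change-detection technique: dump the live config over RPC, key-sort both JSON files into temp copies, and `diff -u` them, so any RPC-level change (here, deleting MallocBdevForConfigChangeCheck) flips the exit code to 1. A minimal sketch of the same idea follows; python3's `json` module stands in for SPDK's `config_filter.py -method sort` (which also strips volatile fields), and the file names are illustrative:

```shell
#!/usr/bin/env bash
# Detect a configuration change by diffing two JSON files after
# key-sorting them, mirroring test/json_config/json_diff.sh.
# python3 -c is a stand-in for config_filter.py -method sort.

sort_json() {
    python3 -c 'import json, sys
print(json.dumps(json.load(open(sys.argv[1])), sort_keys=True))' "$1"
}

json_same() {
    local t1 t2 rc=0
    t1=$(mktemp) t2=$(mktemp)
    sort_json "$1" > "$t1"
    sort_json "$2" > "$t2"
    # diff exits 0 when identical, 1 when the configs differ.
    diff -u "$t1" "$t2" > /dev/null || rc=1
    rm -f "$t1" "$t2"
    return $rc
}

echo '{"b":2,"a":1}' > /tmp/cfg1.json
echo '{"a":1,"b":2}' > /tmp/cfg2.json
if json_same /tmp/cfg1.json /tmp/cfg2.json; then
    echo "INFO: JSON config files are the same"
else
    echo "INFO: configuration change detected."
fi
```

Because both files are normalized before comparison, key ordering in the saved config never produces a false positive; only a real content change does.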
00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:19.720 15:09:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.720 15:09:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@324 -- # [[ -n 3645440 ]] 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:19.720 15:09:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.720 15:09:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:19.720 15:09:47 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:19.720 15:09:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.720 15:09:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.978 15:09:47 json_config -- json_config/json_config.sh@330 -- # killprocess 3645440 00:05:19.978 15:09:47 json_config -- common/autotest_common.sh@952 -- # '[' -z 3645440 ']' 00:05:19.978 15:09:47 json_config -- common/autotest_common.sh@956 -- # kill -0 
3645440 00:05:19.979 15:09:47 json_config -- common/autotest_common.sh@957 -- # uname 00:05:19.979 15:09:47 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:19.979 15:09:47 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3645440 00:05:19.979 15:09:47 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:19.979 15:09:47 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:19.979 15:09:47 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3645440' 00:05:19.979 killing process with pid 3645440 00:05:19.979 15:09:47 json_config -- common/autotest_common.sh@971 -- # kill 3645440 00:05:19.979 15:09:47 json_config -- common/autotest_common.sh@976 -- # wait 3645440 00:05:23.267 15:09:50 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.268 15:09:50 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:23.268 15:09:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:23.268 15:09:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.268 15:09:50 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:23.268 15:09:50 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:23.268 INFO: Success 00:05:23.268 00:05:23.268 real 0m19.022s 00:05:23.268 user 0m19.588s 00:05:23.268 sys 0m2.827s 00:05:23.268 15:09:50 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.268 15:09:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.268 ************************************ 00:05:23.268 END TEST json_config 00:05:23.268 ************************************ 00:05:23.268 15:09:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:23.268 15:09:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.268 15:09:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.268 15:09:50 -- common/autotest_common.sh@10 -- # set +x 00:05:23.268 ************************************ 00:05:23.268 START TEST json_config_extra_key 00:05:23.268 ************************************ 00:05:23.268 15:09:50 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:23.268 15:09:50 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:23.268 15:09:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:23.268 15:09:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:23.268 15:09:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:23.268 15:09:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.268 15:09:50 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:23.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.268 --rc genhtml_branch_coverage=1 00:05:23.268 --rc genhtml_function_coverage=1 00:05:23.268 --rc genhtml_legend=1 00:05:23.268 --rc geninfo_all_blocks=1 
00:05:23.268 --rc geninfo_unexecuted_blocks=1 00:05:23.268 00:05:23.268 ' 00:05:23.268 15:09:50 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:23.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.268 --rc genhtml_branch_coverage=1 00:05:23.268 --rc genhtml_function_coverage=1 00:05:23.268 --rc genhtml_legend=1 00:05:23.268 --rc geninfo_all_blocks=1 00:05:23.268 --rc geninfo_unexecuted_blocks=1 00:05:23.268 00:05:23.268 ' 00:05:23.268 15:09:50 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:23.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.268 --rc genhtml_branch_coverage=1 00:05:23.268 --rc genhtml_function_coverage=1 00:05:23.268 --rc genhtml_legend=1 00:05:23.268 --rc geninfo_all_blocks=1 00:05:23.268 --rc geninfo_unexecuted_blocks=1 00:05:23.268 00:05:23.268 ' 00:05:23.268 15:09:50 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:23.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.268 --rc genhtml_branch_coverage=1 00:05:23.268 --rc genhtml_function_coverage=1 00:05:23.268 --rc genhtml_legend=1 00:05:23.268 --rc geninfo_all_blocks=1 00:05:23.268 --rc geninfo_unexecuted_blocks=1 00:05:23.268 00:05:23.268 ' 00:05:23.268 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
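The `cmp_versions` trace earlier in this chunk (`lt 1.15 2`) splits each version on `IFS=.-:` into an array and compares component by component, padding the shorter array with zeros. A standalone sketch of that compare, under the assumption that every component is numeric (`version_lt` is an illustrative wrapper name; the harness calls it `lt`):

```shell
#!/usr/bin/env bash
# Sketch of scripts/common.sh cmp_versions: split versions on '.', '-'
# or ':' and compare numerically, left to right. Components are assumed
# numeric; missing components default to 0.

version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1   # first differing component decides
        (( a < b )) && return 0
    done
    return 1   # equal versions are not strictly less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This is how the harness decides whether the installed `lcov` (1.15 here) predates version 2 and thus which `--rc` option spelling to export.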
00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.268 15:09:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.268 15:09:50 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.268 15:09:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.268 15:09:50 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.268 15:09:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:23.268 15:09:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:23.268 15:09:50 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.268 15:09:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.268 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:23.268 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:23.268 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:23.269 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:23.269 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:23.269 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:23.269 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:23.269 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:23.269 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:23.269 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.269 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:23.269 INFO: launching applications... 00:05:23.269 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.269 15:09:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:23.269 15:09:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:23.269 15:09:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.269 15:09:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.269 15:09:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.269 15:09:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.269 15:09:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.269 15:09:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3647041 00:05:23.269 15:09:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.269 Waiting for target to run... 
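Note the non-fatal error captured just above: `nvmf/common.sh: line 33: [: : integer expression expected` comes from `'[' '' -eq 1 ']'` — the `test` builtin's `-eq` requires integers on both sides, and the variable expanded to an empty string. A small reproduction and the usual guard (the variable name is illustrative, not the one in common.sh):

```shell
#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" error from the log,
# then show the safe pattern. NICS_COUNT is an illustrative name.

NICS_COUNT=""

# Unsafe: '[' '' -eq 1 ']' prints the error and exits with status 2,
# which the else branch swallows, exactly as in the trace.
if [ "$NICS_COUNT" -eq 1 ] 2>/dev/null; then
    echo "one NIC"
else
    echo "comparison errored or not equal"
fi

# Safe: default the expansion so both -eq operands are always integers.
if [ "${NICS_COUNT:-0}" -eq 1 ]; then
    echo "one NIC"
else
    echo "zero or unset"
fi
```

Because the failing `[` sits in an `if` condition, the script keeps running; the error is cosmetic here, but `${var:-0}` removes it entirely.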
00:05:23.269 15:09:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3647041 /var/tmp/spdk_tgt.sock 00:05:23.269 15:09:50 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3647041 ']' 00:05:23.269 15:09:50 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.269 15:09:50 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.269 15:09:50 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:23.269 15:09:50 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.269 15:09:50 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:23.269 15:09:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.269 [2024-11-06 15:09:50.624686] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:05:23.269 [2024-11-06 15:09:50.624775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647041 ] 00:05:23.528 [2024-11-06 15:09:50.980293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.528 [2024-11-06 15:09:51.076675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.463 15:09:51 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.463 15:09:51 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:24.463 15:09:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:24.463 00:05:24.463 15:09:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:24.463 INFO: shutting down applications... 00:05:24.463 15:09:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:24.463 15:09:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:24.463 15:09:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:24.463 15:09:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3647041 ]] 00:05:24.463 15:09:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3647041 00:05:24.463 15:09:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:24.463 15:09:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.463 15:09:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3647041 00:05:24.463 15:09:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.722 15:09:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.722 15:09:52 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.722 15:09:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3647041 00:05:24.722 15:09:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:25.289 15:09:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:25.289 15:09:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.289 15:09:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3647041 00:05:25.289 15:09:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:25.855 15:09:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:25.855 15:09:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.855 15:09:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3647041 00:05:25.855 15:09:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.422 15:09:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.422 15:09:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.422 15:09:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3647041 00:05:26.422 15:09:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.681 15:09:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.681 15:09:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.681 15:09:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3647041 00:05:26.681 15:09:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.248 15:09:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.248 15:09:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.248 15:09:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3647041 00:05:27.248 15:09:54 json_config_extra_key -- json_config/common.sh@42 -- # 
app_pid["$app"]= 00:05:27.248 15:09:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:27.248 15:09:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:27.248 15:09:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:27.248 SPDK target shutdown done 00:05:27.248 15:09:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:27.248 Success 00:05:27.248 00:05:27.248 real 0m4.451s 00:05:27.248 user 0m3.886s 00:05:27.248 sys 0m0.537s 00:05:27.248 15:09:54 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:27.248 15:09:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.248 ************************************ 00:05:27.248 END TEST json_config_extra_key 00:05:27.248 ************************************ 00:05:27.248 15:09:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.248 15:09:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:27.248 15:09:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:27.248 15:09:54 -- common/autotest_common.sh@10 -- # set +x 00:05:27.248 ************************************ 00:05:27.248 START TEST alias_rpc 00:05:27.248 ************************************ 00:05:27.248 15:09:54 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.525 * Looking for test storage... 
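The shutdown sequence above signals the target (`kill -SIGINT 3647041`), then polls `kill -0` every 0.5 s for up to 30 tries before printing "SPDK target shutdown done". A minimal standalone sketch of that wait loop follows; `wait_for_exit` and the retry budget are illustrative, and it sends SIGTERM rather than SIGINT because background children of a non-interactive shell ignore SIGINT:

```shell
#!/usr/bin/env bash
# Sketch of the json_config/common.sh shutdown wait: signal the app,
# then poll with `kill -0` (signal 0 = existence check only) until the
# process is gone or the retry budget is spent.

wait_for_exit() {
    local pid=$1 tries=${2:-30} i
    kill -SIGTERM "$pid" 2>/dev/null   # the harness uses SIGINT
    for (( i = 0; i < tries; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "SPDK target shutdown done"
            return 0
        fi
        sleep 0.5
    done
    echo "App failed to exit in time" >&2
    return 1
}

sleep 60 &            # stand-in for the spdk_tgt process
wait_for_exit $!
```

Polling with `kill -0` instead of `wait` lets the caller bound the shutdown time and fall through to a hard kill (as common.sh does after 30 failed checks) rather than blocking forever.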
00:05:27.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:27.525 15:09:54 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:27.525 15:09:54 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:27.525 15:09:54 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:27.525 15:09:55 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.525 15:09:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:27.525 15:09:55 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.525 15:09:55 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:27.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.525 --rc genhtml_branch_coverage=1 00:05:27.525 --rc genhtml_function_coverage=1 00:05:27.525 --rc genhtml_legend=1 00:05:27.525 --rc geninfo_all_blocks=1 00:05:27.525 --rc geninfo_unexecuted_blocks=1 00:05:27.525 00:05:27.525 ' 00:05:27.525 15:09:55 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:27.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.525 --rc genhtml_branch_coverage=1 00:05:27.525 --rc genhtml_function_coverage=1 00:05:27.525 --rc genhtml_legend=1 00:05:27.525 --rc geninfo_all_blocks=1 00:05:27.525 --rc geninfo_unexecuted_blocks=1 00:05:27.525 00:05:27.525 ' 00:05:27.525 15:09:55 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:05:27.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.525 --rc genhtml_branch_coverage=1 00:05:27.525 --rc genhtml_function_coverage=1 00:05:27.525 --rc genhtml_legend=1 00:05:27.525 --rc geninfo_all_blocks=1 00:05:27.525 --rc geninfo_unexecuted_blocks=1 00:05:27.525 00:05:27.525 ' 00:05:27.525 15:09:55 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:27.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.525 --rc genhtml_branch_coverage=1 00:05:27.526 --rc genhtml_function_coverage=1 00:05:27.526 --rc genhtml_legend=1 00:05:27.526 --rc geninfo_all_blocks=1 00:05:27.526 --rc geninfo_unexecuted_blocks=1 00:05:27.526 00:05:27.526 ' 00:05:27.526 15:09:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.526 15:09:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3647802 00:05:27.526 15:09:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.526 15:09:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3647802 00:05:27.526 15:09:55 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 3647802 ']' 00:05:27.526 15:09:55 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.526 15:09:55 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:27.526 15:09:55 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.526 15:09:55 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:27.526 15:09:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.526 [2024-11-06 15:09:55.128911] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:05:27.526 [2024-11-06 15:09:55.129000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647802 ] 00:05:27.817 [2024-11-06 15:09:55.252595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.817 [2024-11-06 15:09:55.357094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.894 15:09:56 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:28.894 15:09:56 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:28.894 15:09:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:28.894 15:09:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3647802 00:05:28.894 15:09:56 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3647802 ']' 00:05:28.894 15:09:56 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3647802 00:05:28.894 15:09:56 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:28.894 15:09:56 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:28.894 15:09:56 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3647802 00:05:28.894 15:09:56 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:28.894 15:09:56 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:28.894 15:09:56 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3647802' 00:05:28.894 killing process with pid 3647802 00:05:28.894 15:09:56 alias_rpc -- common/autotest_common.sh@971 -- # kill 3647802 00:05:28.894 15:09:56 alias_rpc -- common/autotest_common.sh@976 -- # wait 3647802 00:05:31.427 00:05:31.427 real 0m3.894s 00:05:31.427 user 0m3.957s 00:05:31.427 sys 0m0.542s 00:05:31.427 15:09:58 alias_rpc -- 
common/autotest_common.sh@1128 -- # xtrace_disable
00:05:31.427 15:09:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.427 ************************************
00:05:31.427 END TEST alias_rpc
00:05:31.427 ************************************
00:05:31.427 15:09:58 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:05:31.427 15:09:58 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:31.427 15:09:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:31.427 15:09:58 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:31.427 15:09:58 -- common/autotest_common.sh@10 -- # set +x
00:05:31.427 ************************************
00:05:31.427 START TEST spdkcli_tcp
00:05:31.427 ************************************
00:05:31.427 15:09:58 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:31.427 * Looking for test storage...
00:05:31.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:31.427 15:09:58 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.427 15:09:58 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:31.427 15:09:58 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.427 15:09:58 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.427 15:09:58 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:31.427 15:09:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:31.427 15:09:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.427 15:09:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:31.427 15:09:59 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.427 15:09:59 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.427 15:09:59 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.427 15:09:59 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:31.427 15:09:59 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.427 15:09:59 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.427 --rc genhtml_branch_coverage=1 00:05:31.427 --rc genhtml_function_coverage=1 00:05:31.427 --rc genhtml_legend=1 00:05:31.427 --rc geninfo_all_blocks=1 00:05:31.427 --rc geninfo_unexecuted_blocks=1 00:05:31.427 00:05:31.427 ' 00:05:31.427 15:09:59 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.427 --rc genhtml_branch_coverage=1 00:05:31.427 --rc genhtml_function_coverage=1 00:05:31.427 --rc genhtml_legend=1 00:05:31.427 --rc geninfo_all_blocks=1 00:05:31.427 --rc geninfo_unexecuted_blocks=1 00:05:31.427 00:05:31.427 ' 00:05:31.427 15:09:59 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:31.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.427 --rc genhtml_branch_coverage=1 00:05:31.427 --rc genhtml_function_coverage=1 00:05:31.427 --rc genhtml_legend=1 00:05:31.427 --rc geninfo_all_blocks=1 00:05:31.427 --rc geninfo_unexecuted_blocks=1 00:05:31.427 00:05:31.427 ' 00:05:31.427 15:09:59 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.427 --rc genhtml_branch_coverage=1 00:05:31.427 --rc genhtml_function_coverage=1 00:05:31.427 --rc genhtml_legend=1 00:05:31.427 --rc geninfo_all_blocks=1 00:05:31.427 --rc geninfo_unexecuted_blocks=1 00:05:31.427 00:05:31.427 ' 00:05:31.427 15:09:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:31.428 15:09:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:31.428 15:09:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:31.428 15:09:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:31.428 15:09:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:31.428 15:09:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:31.428 15:09:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:31.428 15:09:59 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:31.428 15:09:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.428 15:09:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3648555 00:05:31.428 15:09:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3648555 00:05:31.428 15:09:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:31.428 15:09:59 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3648555 ']' 00:05:31.428 15:09:59 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.428 15:09:59 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:31.428 15:09:59 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.428 15:09:59 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:31.428 15:09:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.686 [2024-11-06 15:09:59.100300] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:31.687 [2024-11-06 15:09:59.100409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648555 ] 00:05:31.687 [2024-11-06 15:09:59.222296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.945 [2024-11-06 15:09:59.328595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.945 [2024-11-06 15:09:59.328617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.594 15:10:00 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.594 15:10:00 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:32.594 15:10:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3648790 00:05:32.594 15:10:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:32.594 15:10:00 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:32.852 [ 00:05:32.852 "bdev_malloc_delete", 00:05:32.852 "bdev_malloc_create", 00:05:32.852 "bdev_null_resize", 00:05:32.852 "bdev_null_delete", 00:05:32.852 "bdev_null_create", 00:05:32.852 "bdev_nvme_cuse_unregister", 00:05:32.852 "bdev_nvme_cuse_register", 00:05:32.852 "bdev_opal_new_user", 00:05:32.852 "bdev_opal_set_lock_state", 00:05:32.852 "bdev_opal_delete", 00:05:32.852 "bdev_opal_get_info", 00:05:32.852 "bdev_opal_create", 00:05:32.852 "bdev_nvme_opal_revert", 00:05:32.852 "bdev_nvme_opal_init", 00:05:32.852 "bdev_nvme_send_cmd", 00:05:32.852 "bdev_nvme_set_keys", 00:05:32.852 "bdev_nvme_get_path_iostat", 00:05:32.852 "bdev_nvme_get_mdns_discovery_info", 00:05:32.852 "bdev_nvme_stop_mdns_discovery", 00:05:32.852 "bdev_nvme_start_mdns_discovery", 00:05:32.852 "bdev_nvme_set_multipath_policy", 00:05:32.852 "bdev_nvme_set_preferred_path", 00:05:32.852 "bdev_nvme_get_io_paths", 00:05:32.852 "bdev_nvme_remove_error_injection", 00:05:32.852 "bdev_nvme_add_error_injection", 00:05:32.852 "bdev_nvme_get_discovery_info", 00:05:32.852 "bdev_nvme_stop_discovery", 00:05:32.852 "bdev_nvme_start_discovery", 00:05:32.852 "bdev_nvme_get_controller_health_info", 00:05:32.852 "bdev_nvme_disable_controller", 00:05:32.852 "bdev_nvme_enable_controller", 00:05:32.852 "bdev_nvme_reset_controller", 00:05:32.852 "bdev_nvme_get_transport_statistics", 00:05:32.852 "bdev_nvme_apply_firmware", 00:05:32.852 "bdev_nvme_detach_controller", 00:05:32.852 "bdev_nvme_get_controllers", 00:05:32.852 "bdev_nvme_attach_controller", 00:05:32.852 "bdev_nvme_set_hotplug", 00:05:32.852 "bdev_nvme_set_options", 00:05:32.852 "bdev_passthru_delete", 00:05:32.852 "bdev_passthru_create", 00:05:32.852 "bdev_lvol_set_parent_bdev", 00:05:32.852 "bdev_lvol_set_parent", 00:05:32.852 "bdev_lvol_check_shallow_copy", 00:05:32.852 "bdev_lvol_start_shallow_copy", 00:05:32.852 "bdev_lvol_grow_lvstore", 00:05:32.852 
"bdev_lvol_get_lvols", 00:05:32.852 "bdev_lvol_get_lvstores", 00:05:32.852 "bdev_lvol_delete", 00:05:32.852 "bdev_lvol_set_read_only", 00:05:32.852 "bdev_lvol_resize", 00:05:32.852 "bdev_lvol_decouple_parent", 00:05:32.852 "bdev_lvol_inflate", 00:05:32.852 "bdev_lvol_rename", 00:05:32.852 "bdev_lvol_clone_bdev", 00:05:32.852 "bdev_lvol_clone", 00:05:32.852 "bdev_lvol_snapshot", 00:05:32.852 "bdev_lvol_create", 00:05:32.852 "bdev_lvol_delete_lvstore", 00:05:32.852 "bdev_lvol_rename_lvstore", 00:05:32.852 "bdev_lvol_create_lvstore", 00:05:32.852 "bdev_raid_set_options", 00:05:32.852 "bdev_raid_remove_base_bdev", 00:05:32.852 "bdev_raid_add_base_bdev", 00:05:32.852 "bdev_raid_delete", 00:05:32.852 "bdev_raid_create", 00:05:32.852 "bdev_raid_get_bdevs", 00:05:32.852 "bdev_error_inject_error", 00:05:32.852 "bdev_error_delete", 00:05:32.852 "bdev_error_create", 00:05:32.852 "bdev_split_delete", 00:05:32.852 "bdev_split_create", 00:05:32.852 "bdev_delay_delete", 00:05:32.852 "bdev_delay_create", 00:05:32.852 "bdev_delay_update_latency", 00:05:32.852 "bdev_zone_block_delete", 00:05:32.852 "bdev_zone_block_create", 00:05:32.852 "blobfs_create", 00:05:32.852 "blobfs_detect", 00:05:32.852 "blobfs_set_cache_size", 00:05:32.852 "bdev_aio_delete", 00:05:32.852 "bdev_aio_rescan", 00:05:32.852 "bdev_aio_create", 00:05:32.852 "bdev_ftl_set_property", 00:05:32.852 "bdev_ftl_get_properties", 00:05:32.852 "bdev_ftl_get_stats", 00:05:32.852 "bdev_ftl_unmap", 00:05:32.852 "bdev_ftl_unload", 00:05:32.852 "bdev_ftl_delete", 00:05:32.852 "bdev_ftl_load", 00:05:32.852 "bdev_ftl_create", 00:05:32.852 "bdev_virtio_attach_controller", 00:05:32.852 "bdev_virtio_scsi_get_devices", 00:05:32.852 "bdev_virtio_detach_controller", 00:05:32.852 "bdev_virtio_blk_set_hotplug", 00:05:32.852 "bdev_iscsi_delete", 00:05:32.852 "bdev_iscsi_create", 00:05:32.852 "bdev_iscsi_set_options", 00:05:32.852 "accel_error_inject_error", 00:05:32.852 "ioat_scan_accel_module", 00:05:32.852 "dsa_scan_accel_module", 
00:05:32.852 "iaa_scan_accel_module", 00:05:32.852 "keyring_file_remove_key", 00:05:32.852 "keyring_file_add_key", 00:05:32.852 "keyring_linux_set_options", 00:05:32.852 "fsdev_aio_delete", 00:05:32.852 "fsdev_aio_create", 00:05:32.852 "iscsi_get_histogram", 00:05:32.852 "iscsi_enable_histogram", 00:05:32.852 "iscsi_set_options", 00:05:32.852 "iscsi_get_auth_groups", 00:05:32.852 "iscsi_auth_group_remove_secret", 00:05:32.852 "iscsi_auth_group_add_secret", 00:05:32.852 "iscsi_delete_auth_group", 00:05:32.852 "iscsi_create_auth_group", 00:05:32.852 "iscsi_set_discovery_auth", 00:05:32.852 "iscsi_get_options", 00:05:32.852 "iscsi_target_node_request_logout", 00:05:32.852 "iscsi_target_node_set_redirect", 00:05:32.852 "iscsi_target_node_set_auth", 00:05:32.852 "iscsi_target_node_add_lun", 00:05:32.852 "iscsi_get_stats", 00:05:32.852 "iscsi_get_connections", 00:05:32.852 "iscsi_portal_group_set_auth", 00:05:32.852 "iscsi_start_portal_group", 00:05:32.852 "iscsi_delete_portal_group", 00:05:32.852 "iscsi_create_portal_group", 00:05:32.852 "iscsi_get_portal_groups", 00:05:32.852 "iscsi_delete_target_node", 00:05:32.852 "iscsi_target_node_remove_pg_ig_maps", 00:05:32.852 "iscsi_target_node_add_pg_ig_maps", 00:05:32.852 "iscsi_create_target_node", 00:05:32.852 "iscsi_get_target_nodes", 00:05:32.852 "iscsi_delete_initiator_group", 00:05:32.852 "iscsi_initiator_group_remove_initiators", 00:05:32.852 "iscsi_initiator_group_add_initiators", 00:05:32.852 "iscsi_create_initiator_group", 00:05:32.852 "iscsi_get_initiator_groups", 00:05:32.852 "nvmf_set_crdt", 00:05:32.852 "nvmf_set_config", 00:05:32.852 "nvmf_set_max_subsystems", 00:05:32.852 "nvmf_stop_mdns_prr", 00:05:32.852 "nvmf_publish_mdns_prr", 00:05:32.852 "nvmf_subsystem_get_listeners", 00:05:32.852 "nvmf_subsystem_get_qpairs", 00:05:32.852 "nvmf_subsystem_get_controllers", 00:05:32.852 "nvmf_get_stats", 00:05:32.852 "nvmf_get_transports", 00:05:32.852 "nvmf_create_transport", 00:05:32.852 "nvmf_get_targets", 00:05:32.852 
"nvmf_delete_target", 00:05:32.852 "nvmf_create_target", 00:05:32.852 "nvmf_subsystem_allow_any_host", 00:05:32.852 "nvmf_subsystem_set_keys", 00:05:32.852 "nvmf_subsystem_remove_host", 00:05:32.852 "nvmf_subsystem_add_host", 00:05:32.852 "nvmf_ns_remove_host", 00:05:32.852 "nvmf_ns_add_host", 00:05:32.852 "nvmf_subsystem_remove_ns", 00:05:32.852 "nvmf_subsystem_set_ns_ana_group", 00:05:32.852 "nvmf_subsystem_add_ns", 00:05:32.852 "nvmf_subsystem_listener_set_ana_state", 00:05:32.852 "nvmf_discovery_get_referrals", 00:05:32.852 "nvmf_discovery_remove_referral", 00:05:32.852 "nvmf_discovery_add_referral", 00:05:32.852 "nvmf_subsystem_remove_listener", 00:05:32.852 "nvmf_subsystem_add_listener", 00:05:32.852 "nvmf_delete_subsystem", 00:05:32.852 "nvmf_create_subsystem", 00:05:32.852 "nvmf_get_subsystems", 00:05:32.852 "env_dpdk_get_mem_stats", 00:05:32.852 "nbd_get_disks", 00:05:32.852 "nbd_stop_disk", 00:05:32.852 "nbd_start_disk", 00:05:32.852 "ublk_recover_disk", 00:05:32.852 "ublk_get_disks", 00:05:32.852 "ublk_stop_disk", 00:05:32.852 "ublk_start_disk", 00:05:32.852 "ublk_destroy_target", 00:05:32.852 "ublk_create_target", 00:05:32.852 "virtio_blk_create_transport", 00:05:32.852 "virtio_blk_get_transports", 00:05:32.852 "vhost_controller_set_coalescing", 00:05:32.852 "vhost_get_controllers", 00:05:32.852 "vhost_delete_controller", 00:05:32.852 "vhost_create_blk_controller", 00:05:32.852 "vhost_scsi_controller_remove_target", 00:05:32.852 "vhost_scsi_controller_add_target", 00:05:32.852 "vhost_start_scsi_controller", 00:05:32.852 "vhost_create_scsi_controller", 00:05:32.852 "thread_set_cpumask", 00:05:32.852 "scheduler_set_options", 00:05:32.852 "framework_get_governor", 00:05:32.852 "framework_get_scheduler", 00:05:32.852 "framework_set_scheduler", 00:05:32.852 "framework_get_reactors", 00:05:32.852 "thread_get_io_channels", 00:05:32.852 "thread_get_pollers", 00:05:32.852 "thread_get_stats", 00:05:32.852 "framework_monitor_context_switch", 00:05:32.852 
"spdk_kill_instance", 00:05:32.852 "log_enable_timestamps", 00:05:32.852 "log_get_flags", 00:05:32.852 "log_clear_flag", 00:05:32.852 "log_set_flag", 00:05:32.852 "log_get_level", 00:05:32.852 "log_set_level", 00:05:32.852 "log_get_print_level", 00:05:32.852 "log_set_print_level", 00:05:32.852 "framework_enable_cpumask_locks", 00:05:32.852 "framework_disable_cpumask_locks", 00:05:32.852 "framework_wait_init", 00:05:32.852 "framework_start_init", 00:05:32.852 "scsi_get_devices", 00:05:32.852 "bdev_get_histogram", 00:05:32.852 "bdev_enable_histogram", 00:05:32.852 "bdev_set_qos_limit", 00:05:32.852 "bdev_set_qd_sampling_period", 00:05:32.852 "bdev_get_bdevs", 00:05:32.852 "bdev_reset_iostat", 00:05:32.852 "bdev_get_iostat", 00:05:32.852 "bdev_examine", 00:05:32.852 "bdev_wait_for_examine", 00:05:32.852 "bdev_set_options", 00:05:32.852 "accel_get_stats", 00:05:32.852 "accel_set_options", 00:05:32.852 "accel_set_driver", 00:05:32.852 "accel_crypto_key_destroy", 00:05:32.852 "accel_crypto_keys_get", 00:05:32.852 "accel_crypto_key_create", 00:05:32.852 "accel_assign_opc", 00:05:32.852 "accel_get_module_info", 00:05:32.852 "accel_get_opc_assignments", 00:05:32.852 "vmd_rescan", 00:05:32.853 "vmd_remove_device", 00:05:32.853 "vmd_enable", 00:05:32.853 "sock_get_default_impl", 00:05:32.853 "sock_set_default_impl", 00:05:32.853 "sock_impl_set_options", 00:05:32.853 "sock_impl_get_options", 00:05:32.853 "iobuf_get_stats", 00:05:32.853 "iobuf_set_options", 00:05:32.853 "keyring_get_keys", 00:05:32.853 "framework_get_pci_devices", 00:05:32.853 "framework_get_config", 00:05:32.853 "framework_get_subsystems", 00:05:32.853 "fsdev_set_opts", 00:05:32.853 "fsdev_get_opts", 00:05:32.853 "trace_get_info", 00:05:32.853 "trace_get_tpoint_group_mask", 00:05:32.853 "trace_disable_tpoint_group", 00:05:32.853 "trace_enable_tpoint_group", 00:05:32.853 "trace_clear_tpoint_mask", 00:05:32.853 "trace_set_tpoint_mask", 00:05:32.853 "notify_get_notifications", 00:05:32.853 "notify_get_types", 
00:05:32.853 "spdk_get_version",
00:05:32.853 "rpc_get_methods"
00:05:32.853 ]
00:05:32.853 15:10:00 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:05:32.853 15:10:00 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:32.853 15:10:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:32.853 15:10:00 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:05:32.853 15:10:00 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3648555
00:05:32.853 15:10:00 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3648555 ']'
00:05:32.853 15:10:00 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3648555
00:05:32.853 15:10:00 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname
00:05:32.853 15:10:00 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:32.853 15:10:00 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3648555
00:05:32.853 15:10:00 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:32.853 15:10:00 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:32.853 15:10:00 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3648555'
00:05:32.853 killing process with pid 3648555
00:05:32.853 15:10:00 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3648555
00:05:32.853 15:10:00 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3648555
00:05:35.383
00:05:35.383 real 0m4.004s
00:05:35.383 user 0m7.324s
00:05:35.383 sys 0m0.600s
00:05:35.383 15:10:02 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:35.383 15:10:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:35.383 ************************************
00:05:35.383 END TEST spdkcli_tcp
00:05:35.383 ************************************
00:05:35.383 15:10:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:35.383 15:10:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:35.383 15:10:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.383 15:10:02 -- common/autotest_common.sh@10 -- # set +x 00:05:35.383 ************************************ 00:05:35.383 START TEST dpdk_mem_utility 00:05:35.383 ************************************ 00:05:35.383 15:10:02 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:35.383 * Looking for test storage... 00:05:35.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:35.383 15:10:02 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:35.383 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:35.383 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:35.642 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.642 
15:10:03 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.642 15:10:03 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:35.642 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.642 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.642 --rc genhtml_branch_coverage=1 00:05:35.642 --rc genhtml_function_coverage=1 00:05:35.642 --rc genhtml_legend=1 00:05:35.642 --rc geninfo_all_blocks=1 00:05:35.642 --rc 
geninfo_unexecuted_blocks=1 00:05:35.642 00:05:35.642 ' 00:05:35.642 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.642 --rc genhtml_branch_coverage=1 00:05:35.642 --rc genhtml_function_coverage=1 00:05:35.642 --rc genhtml_legend=1 00:05:35.642 --rc geninfo_all_blocks=1 00:05:35.642 --rc geninfo_unexecuted_blocks=1 00:05:35.642 00:05:35.642 ' 00:05:35.642 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.642 --rc genhtml_branch_coverage=1 00:05:35.642 --rc genhtml_function_coverage=1 00:05:35.642 --rc genhtml_legend=1 00:05:35.642 --rc geninfo_all_blocks=1 00:05:35.642 --rc geninfo_unexecuted_blocks=1 00:05:35.642 00:05:35.642 ' 00:05:35.642 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.642 --rc genhtml_branch_coverage=1 00:05:35.642 --rc genhtml_function_coverage=1 00:05:35.642 --rc genhtml_legend=1 00:05:35.642 --rc geninfo_all_blocks=1 00:05:35.642 --rc geninfo_unexecuted_blocks=1 00:05:35.642 00:05:35.642 ' 00:05:35.642 15:10:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:35.642 15:10:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3649321 00:05:35.642 15:10:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.642 15:10:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3649321 00:05:35.642 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3649321 ']' 00:05:35.642 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:35.642 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:35.642 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.642 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:35.642 15:10:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.642 [2024-11-06 15:10:03.166495] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:05:35.642 [2024-11-06 15:10:03.166589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649321 ] 00:05:35.901 [2024-11-06 15:10:03.289317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.901 [2024-11-06 15:10:03.389521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.838 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:36.838 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:36.838 15:10:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:36.838 15:10:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:36.838 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.838 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.838 { 00:05:36.838 "filename": "/tmp/spdk_mem_dump.txt" 00:05:36.838 } 00:05:36.838 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.838 
15:10:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:36.838 DPDK memory size 816.000000 MiB in 1 heap(s)
00:05:36.838 1 heaps totaling size 816.000000 MiB
00:05:36.838 size: 816.000000 MiB heap id: 0
00:05:36.838 end heaps----------
00:05:36.838 9 mempools totaling size 595.772034 MiB
00:05:36.838 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:36.838 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:36.838 size: 92.545471 MiB name: bdev_io_3649321
00:05:36.838 size: 50.003479 MiB name: msgpool_3649321
00:05:36.838 size: 36.509338 MiB name: fsdev_io_3649321
00:05:36.838 size: 21.763794 MiB name: PDU_Pool
00:05:36.838 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:36.838 size: 4.133484 MiB name: evtpool_3649321
00:05:36.838 size: 0.026123 MiB name: Session_Pool
00:05:36.838 end mempools-------
00:05:36.838 6 memzones totaling size 4.142822 MiB
00:05:36.838 size: 1.000366 MiB name: RG_ring_0_3649321
00:05:36.838 size: 1.000366 MiB name: RG_ring_1_3649321
00:05:36.838 size: 1.000366 MiB name: RG_ring_4_3649321
00:05:36.839 size: 1.000366 MiB name: RG_ring_5_3649321
00:05:36.839 size: 0.125366 MiB name: RG_ring_2_3649321
00:05:36.839 size: 0.015991 MiB name: RG_ring_3_3649321
00:05:36.839 end memzones-------
00:05:36.839 15:10:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:36.839 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19
00:05:36.839 list of free elements.
size: 16.857605 MiB 00:05:36.839 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:36.839 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:36.839 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:36.839 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:36.839 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:36.839 element at address: 0x200019200000 with size: 0.999329 MiB 00:05:36.839 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:36.839 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:36.839 element at address: 0x200018a00000 with size: 0.959900 MiB 00:05:36.839 element at address: 0x200019500040 with size: 0.937256 MiB 00:05:36.839 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:36.839 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:05:36.839 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:36.839 element at address: 0x200018e00000 with size: 0.491150 MiB 00:05:36.839 element at address: 0x200019600000 with size: 0.485657 MiB 00:05:36.839 element at address: 0x200012c00000 with size: 0.446167 MiB 00:05:36.839 element at address: 0x200028000000 with size: 0.411072 MiB 00:05:36.839 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:36.839 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:36.839 list of standard malloc elements. 
size: 199.221497 MiB
00:05:36.839 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:36.839 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:36.839 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:05:36.839 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:05:36.839 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:36.839 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:36.839 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:05:36.839 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:36.839 element at address: 0x200012bff040 with size: 0.000427 MiB
00:05:36.839 element at address: 0x200012bffa00 with size: 0.000366 MiB
00:05:36.839 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:05:36.839 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:05:36.839 element at address: 0x2000004ff840 with size: 0.000244 MiB
00:05:36.839 element at address: 0x2000004ff940 with size: 0.000244 MiB
00:05:36.839 element at address: 0x2000004ffa40 with size: 0.000244 MiB
00:05:36.839 element at address: 0x2000004ffcc0 with size: 0.000244 MiB
00:05:36.839 element at address: 0x2000004ffdc0 with size: 0.000244 MiB
00:05:36.839 element at address: 0x20000087f3c0 with size: 0.000244 MiB
00:05:36.839 element at address: 0x20000087f4c0 with size: 0.000244 MiB
00:05:36.839 element at address: 0x2000008ff800 with size: 0.000244 MiB
00:05:36.839 element at address: 0x2000008ffa80 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200000cfef00 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200000cff000 with size: 0.000244 MiB
00:05:36.839 element at address: 0x20000a5ff480 with size: 0.000244 MiB
00:05:36.839 element at address: 0x20000a5ff580 with size: 0.000244 MiB
00:05:36.839 element at address: 0x20000a5ff680 with size: 0.000244 MiB
00:05:36.839 element at address: 0x20000a5ff780 with size: 0.000244 MiB
00:05:36.839 element at address: 0x20000a5ff880 with size: 0.000244 MiB
00:05:36.839 element at address: 0x20000a5ff980 with size: 0.000244 MiB
00:05:36.839 element at address: 0x20000a5ffc00 with size: 0.000244 MiB
00:05:36.839 element at address: 0x20000a5ffd00 with size: 0.000244 MiB
00:05:36.839 element at address: 0x20000a5ffe00 with size: 0.000244 MiB
00:05:36.839 element at address: 0x20000a5fff00 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200012bff200 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200012bff300 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200012bff400 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200012bff500 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200012bff600 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200012bff700 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200012bff800 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200012bff900 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200012bffb80 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200012bffc80 with size: 0.000244 MiB
00:05:36.839 element at address: 0x200012bfff00 with size: 0.000244 MiB
00:05:36.839 list of memzone associated elements.
size: 599.920898 MiB
00:05:36.839 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:05:36.839 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:36.839 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:05:36.839 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:36.839 element at address: 0x200012df4740 with size: 92.045105 MiB
00:05:36.839 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3649321_0
00:05:36.839 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:36.839 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3649321_0
00:05:36.839 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:36.839 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3649321_0
00:05:36.839 element at address: 0x2000197be900 with size: 20.255615 MiB
00:05:36.839 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:36.839 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:05:36.839 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:36.839 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:36.839 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3649321_0
00:05:36.839 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:36.839 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3649321
00:05:36.839 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:36.839 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3649321
00:05:36.839 element at address: 0x200018efde00 with size: 1.008179 MiB
00:05:36.839 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:36.839 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:05:36.839 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:36.839 element at address: 0x200018afde00 with size: 1.008179 MiB
00:05:36.839 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:36.839 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:05:36.839 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:36.839 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:36.839 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3649321
00:05:36.839 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:36.839 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3649321
00:05:36.839 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:05:36.839 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3649321
00:05:36.839 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:05:36.839 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3649321
00:05:36.839 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:36.839 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3649321
00:05:36.839 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:36.839 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3649321
00:05:36.839 element at address: 0x200018e7dbc0 with size: 0.500549 MiB
00:05:36.839 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:36.839 element at address: 0x200012c72380 with size: 0.500549 MiB
00:05:36.839 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:36.839 element at address: 0x20001967c540 with size: 0.250549 MiB
00:05:36.839 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:36.839 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:36.839 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3649321
00:05:36.839 element at address: 0x20000085f180 with size: 0.125549 MiB
00:05:36.839 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3649321
00:05:36.839 element at address: 0x200018af5bc0 with size: 0.031799 MiB
00:05:36.839 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:36.839 element at address: 0x2000280693c0 with size: 0.023804 MiB
00:05:36.839 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:36.839 element at address: 0x20000085af40 with size: 0.016174 MiB
00:05:36.839 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3649321
00:05:36.839 element at address: 0x20002806f540 with size: 0.002502 MiB
00:05:36.839 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:36.839 element at address: 0x2000004ffb40 with size: 0.000366 MiB
00:05:36.839 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3649321
00:05:36.839 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:36.839 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3649321
00:05:36.839 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:36.839 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3649321
00:05:36.839 element at address: 0x20000a5ffa80 with size: 0.000366 MiB
00:05:36.839 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:36.839 15:10:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:36.839 15:10:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3649321
00:05:36.839 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3649321 ']'
00:05:36.839 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3649321
00:05:36.839 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:05:36.839 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:36.839 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3649321
00:05:36.839 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:36.840 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:36.840 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3649321'
00:05:36.840 killing process with pid 3649321
00:05:36.840 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3649321
00:05:36.840 15:10:04 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3649321
00:05:39.374 
00:05:39.374 real 0m3.808s
00:05:39.374 user 0m3.756s
00:05:39.374 sys 0m0.578s
15:10:06 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:39.374 15:10:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:39.374 ************************************
00:05:39.374 END TEST dpdk_mem_utility
00:05:39.374 ************************************
00:05:39.374 15:10:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:39.374 15:10:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:39.374 15:10:06 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:39.374 15:10:06 -- common/autotest_common.sh@10 -- # set +x
00:05:39.374 ************************************
00:05:39.374 START TEST event
00:05:39.374 ************************************
00:05:39.374 15:10:06 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:39.374 * Looking for test storage...
00:05:39.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:39.374 15:10:06 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:39.374 15:10:06 event -- common/autotest_common.sh@1691 -- # lcov --version
00:05:39.374 15:10:06 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:39.375 15:10:06 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:39.375 15:10:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:39.375 15:10:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:39.375 15:10:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:39.375 15:10:06 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:39.375 15:10:06 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:39.375 15:10:06 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:39.375 15:10:06 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:39.375 15:10:06 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:39.375 15:10:06 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:39.375 15:10:06 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:39.375 15:10:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:39.375 15:10:06 event -- scripts/common.sh@344 -- # case "$op" in
00:05:39.375 15:10:06 event -- scripts/common.sh@345 -- # : 1
00:05:39.375 15:10:06 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:39.375 15:10:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:39.375 15:10:06 event -- scripts/common.sh@365 -- # decimal 1
00:05:39.375 15:10:06 event -- scripts/common.sh@353 -- # local d=1
00:05:39.375 15:10:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:39.375 15:10:06 event -- scripts/common.sh@355 -- # echo 1
00:05:39.375 15:10:06 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:39.375 15:10:06 event -- scripts/common.sh@366 -- # decimal 2
00:05:39.375 15:10:06 event -- scripts/common.sh@353 -- # local d=2
00:05:39.375 15:10:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:39.375 15:10:06 event -- scripts/common.sh@355 -- # echo 2
00:05:39.375 15:10:06 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:39.375 15:10:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:39.375 15:10:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:39.375 15:10:06 event -- scripts/common.sh@368 -- # return 0
00:05:39.375 15:10:06 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:39.375 15:10:06 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:39.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:39.375 --rc genhtml_branch_coverage=1
00:05:39.375 --rc genhtml_function_coverage=1
00:05:39.375 --rc genhtml_legend=1
00:05:39.375 --rc geninfo_all_blocks=1
00:05:39.375 --rc geninfo_unexecuted_blocks=1
00:05:39.375 
00:05:39.375 '
00:05:39.375 15:10:06 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:39.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:39.375 --rc genhtml_branch_coverage=1
00:05:39.375 --rc genhtml_function_coverage=1
00:05:39.375 --rc genhtml_legend=1
00:05:39.375 --rc geninfo_all_blocks=1
00:05:39.375 --rc geninfo_unexecuted_blocks=1
00:05:39.375 
00:05:39.375 '
00:05:39.375 15:10:06 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:39.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:39.375 --rc genhtml_branch_coverage=1
00:05:39.375 --rc genhtml_function_coverage=1
00:05:39.375 --rc genhtml_legend=1
00:05:39.375 --rc geninfo_all_blocks=1
00:05:39.375 --rc geninfo_unexecuted_blocks=1
00:05:39.375 
00:05:39.375 '
00:05:39.375 15:10:06 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:39.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:39.375 --rc genhtml_branch_coverage=1
00:05:39.375 --rc genhtml_function_coverage=1
00:05:39.375 --rc genhtml_legend=1
00:05:39.375 --rc geninfo_all_blocks=1
00:05:39.375 --rc geninfo_unexecuted_blocks=1
00:05:39.375 
00:05:39.375 '
00:05:39.375 15:10:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:39.375 15:10:06 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:39.375 15:10:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:39.375 15:10:06 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:05:39.375 15:10:06 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:39.375 15:10:06 event -- common/autotest_common.sh@10 -- # set +x
00:05:39.375 ************************************
00:05:39.375 START TEST event_perf
00:05:39.375 ************************************
00:05:39.375 15:10:06 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:39.634 Running I/O for 1 seconds...[2024-11-06 15:10:07.030573] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:05:39.634 [2024-11-06 15:10:07.030643] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650075 ]
00:05:39.634 [2024-11-06 15:10:07.152314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:39.634 [2024-11-06 15:10:07.262957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:39.634 [2024-11-06 15:10:07.263032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:39.634 [2024-11-06 15:10:07.263101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.634 [2024-11-06 15:10:07.263123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:41.011 Running I/O for 1 seconds...
00:05:41.011 lcore 0: 207663
00:05:41.011 lcore 1: 207663
00:05:41.011 lcore 2: 207663
00:05:41.011 lcore 3: 207663
00:05:41.011 done.
00:05:41.011 
00:05:41.011 real 0m1.492s
00:05:41.011 user 0m4.353s
00:05:41.011 sys 0m0.134s
15:10:08 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:41.011 15:10:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:41.011 ************************************
00:05:41.011 END TEST event_perf
00:05:41.011 ************************************
00:05:41.011 15:10:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:41.011 15:10:08 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:05:41.011 15:10:08 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:41.011 15:10:08 event -- common/autotest_common.sh@10 -- # set +x
00:05:41.011 ************************************
00:05:41.011 START TEST event_reactor
00:05:41.011 ************************************
00:05:41.011 15:10:08 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:41.011 [2024-11-06 15:10:08.594843] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:05:41.011 [2024-11-06 15:10:08.594919] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650332 ]
00:05:41.270 [2024-11-06 15:10:08.715258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.270 [2024-11-06 15:10:08.823144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.645 test_start
00:05:42.645 oneshot
00:05:42.645 tick 100
00:05:42.645 tick 100
00:05:42.645 tick 250
00:05:42.645 tick 100
00:05:42.645 tick 100
00:05:42.645 tick 250
00:05:42.645 tick 100
00:05:42.645 tick 500
00:05:42.645 tick 100
00:05:42.645 tick 100
00:05:42.645 tick 250
00:05:42.645 tick 100
00:05:42.645 tick 100
00:05:42.645 test_end
00:05:42.645 
00:05:42.645 real 0m1.482s
00:05:42.645 user 0m1.345s
00:05:42.645 sys 0m0.130s
15:10:10 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:42.645 15:10:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:42.645 ************************************
00:05:42.645 END TEST event_reactor
00:05:42.645 ************************************
00:05:42.645 15:10:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:42.645 15:10:10 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:05:42.645 15:10:10 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:42.645 15:10:10 event -- common/autotest_common.sh@10 -- # set +x
00:05:42.645 ************************************
00:05:42.645 START TEST event_reactor_perf
00:05:42.645 ************************************
00:05:42.645 15:10:10 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:42.645 [2024-11-06 15:10:10.143767] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:05:42.645 [2024-11-06 15:10:10.143851] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650589 ]
00:05:42.645 [2024-11-06 15:10:10.267051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.905 [2024-11-06 15:10:10.373039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.282 test_start
00:05:44.282 test_end
00:05:44.282 Performance: 398412 events per second
00:05:44.282 
00:05:44.282 real 0m1.481s
00:05:44.282 user 0m1.351s
00:05:44.282 sys 0m0.123s
15:10:11 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:44.282 15:10:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:44.282 ************************************
00:05:44.282 END TEST event_reactor_perf
00:05:44.282 ************************************
00:05:44.282 15:10:11 event -- event/event.sh@49 -- # uname -s
00:05:44.282 15:10:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:44.282 15:10:11 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:44.282 15:10:11 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:44.282 15:10:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:44.282 15:10:11 event -- common/autotest_common.sh@10 -- # set +x
00:05:44.282 ************************************
00:05:44.282 START TEST event_scheduler
00:05:44.282 ************************************
00:05:44.282 15:10:11 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:44.282 * Looking for test storage...
00:05:44.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:44.282 15:10:11 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:44.282 15:10:11 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version
00:05:44.282 15:10:11 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:44.282 15:10:11 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:44.282 15:10:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:44.283 15:10:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:44.283 15:10:11 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:44.283 15:10:11 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:44.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.283 --rc genhtml_branch_coverage=1
00:05:44.283 --rc genhtml_function_coverage=1
00:05:44.283 --rc genhtml_legend=1
00:05:44.283 --rc geninfo_all_blocks=1
00:05:44.283 --rc geninfo_unexecuted_blocks=1
00:05:44.283 
00:05:44.283 '
00:05:44.283 15:10:11 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:44.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.283 --rc genhtml_branch_coverage=1
00:05:44.283 --rc genhtml_function_coverage=1
00:05:44.283 --rc genhtml_legend=1
00:05:44.283 --rc geninfo_all_blocks=1
00:05:44.283 --rc geninfo_unexecuted_blocks=1
00:05:44.283 
00:05:44.283 '
00:05:44.283 15:10:11 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:44.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.283 --rc genhtml_branch_coverage=1
00:05:44.283 --rc genhtml_function_coverage=1
00:05:44.283 --rc genhtml_legend=1
00:05:44.283 --rc geninfo_all_blocks=1
00:05:44.283 --rc geninfo_unexecuted_blocks=1
00:05:44.283 
00:05:44.283 '
00:05:44.283 15:10:11 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:44.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:44.283 --rc genhtml_branch_coverage=1
00:05:44.283 --rc genhtml_function_coverage=1
00:05:44.283 --rc genhtml_legend=1
00:05:44.283 --rc geninfo_all_blocks=1
00:05:44.283 --rc geninfo_unexecuted_blocks=1
00:05:44.283 
00:05:44.283 '
00:05:44.283 15:10:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:44.283 15:10:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:44.283 15:10:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3650878
00:05:44.283 15:10:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:44.283 15:10:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3650878
00:05:44.283 15:10:11 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3650878 ']'
00:05:44.283 15:10:11 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:44.283 15:10:11 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:44.283 15:10:11 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:44.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:44.283 15:10:11 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:44.283 15:10:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:44.283 [2024-11-06 15:10:11.900412] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:05:44.283 [2024-11-06 15:10:11.900501] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650878 ]
00:05:44.541 [2024-11-06 15:10:12.025030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:44.541 [2024-11-06 15:10:12.133889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.541 [2024-11-06 15:10:12.133971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:44.541 [2024-11-06 15:10:12.134034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:44.541 [2024-11-06 15:10:12.134058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:45.108 15:10:12 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:45.109 15:10:12 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0
00:05:45.109 15:10:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:45.109 15:10:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.109 15:10:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:45.109 [2024-11-06 15:10:12.724453] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:45.109 [2024-11-06 15:10:12.724478] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:45.109 [2024-11-06 15:10:12.724495] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:45.109 [2024-11-06 15:10:12.724504] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:45.109 [2024-11-06 15:10:12.724514] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:45.109 15:10:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.109 15:10:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:45.109 15:10:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.109 15:10:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:45.677 [2024-11-06 15:10:13.039914] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:45.677 15:10:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.677 15:10:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:45.677 15:10:13 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:45.677 15:10:13 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:45.677 15:10:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:45.677 ************************************
00:05:45.677 START TEST scheduler_create_thread
00:05:45.677 ************************************
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.677 2
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.677 3
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.677 4
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.677 5
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.677 6
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.677 7
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.677 15:10:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.678 8
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.678 9
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.678 10
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.678 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.245 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:46.245 15:10:13
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:46.245 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.245 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.621 15:10:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.621 15:10:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:47.621 15:10:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:47.621 15:10:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.621 15:10:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.559 15:10:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.559 00:05:48.559 real 0m3.105s 00:05:48.559 user 0m0.028s 00:05:48.559 sys 0m0.002s 00:05:48.559 15:10:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.559 15:10:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.559 ************************************ 00:05:48.559 END TEST scheduler_create_thread 00:05:48.559 ************************************ 00:05:48.818 15:10:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:48.818 15:10:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3650878 00:05:48.818 15:10:16 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3650878 ']' 00:05:48.818 15:10:16 event.event_scheduler -- common/autotest_common.sh@956 -- # 
kill -0 3650878 00:05:48.818 15:10:16 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:48.818 15:10:16 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.818 15:10:16 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3650878 00:05:48.818 15:10:16 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:48.818 15:10:16 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:48.818 15:10:16 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3650878' 00:05:48.818 killing process with pid 3650878 00:05:48.818 15:10:16 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3650878 00:05:48.818 15:10:16 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3650878 00:05:49.077 [2024-11-06 15:10:16.561274] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:50.453 00:05:50.453 real 0m6.067s 00:05:50.453 user 0m12.598s 00:05:50.453 sys 0m0.467s 00:05:50.453 15:10:17 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.453 15:10:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.453 ************************************ 00:05:50.453 END TEST event_scheduler 00:05:50.453 ************************************ 00:05:50.453 15:10:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:50.453 15:10:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:50.453 15:10:17 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:50.453 15:10:17 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.453 15:10:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.453 ************************************ 00:05:50.453 START TEST app_repeat 00:05:50.453 ************************************ 00:05:50.453 15:10:17 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3652071 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3652071' 00:05:50.453 Process app_repeat pid: 3652071 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:50.453 spdk_app_start Round 0 00:05:50.453 15:10:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3652071 /var/tmp/spdk-nbd.sock 00:05:50.453 15:10:17 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3652071 ']' 00:05:50.453 15:10:17 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.453 15:10:17 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:50.453 15:10:17 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.453 15:10:17 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:50.453 15:10:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.453 [2024-11-06 15:10:17.847649] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:05:50.453 [2024-11-06 15:10:17.847750] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652071 ] 00:05:50.453 [2024-11-06 15:10:17.968835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.453 [2024-11-06 15:10:18.076961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.453 [2024-11-06 15:10:18.076983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.388 15:10:18 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:51.388 15:10:18 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:51.388 15:10:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.388 Malloc0 00:05:51.389 15:10:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.647 Malloc1 00:05:51.647 15:10:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.647 
15:10:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.647 15:10:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.906 /dev/nbd0 00:05:51.906 15:10:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.906 15:10:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:51.906 1+0 records in 00:05:51.906 1+0 records out 00:05:51.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000104189 s, 39.3 MB/s 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:51.906 15:10:19 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:51.906 15:10:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.906 15:10:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.906 15:10:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.164 /dev/nbd1 00:05:52.164 15:10:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.164 15:10:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.164 15:10:19 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:52.164 15:10:19 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:52.164 15:10:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:52.164 15:10:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:52.165 15:10:19 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:52.165 15:10:19 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:52.165 15:10:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:52.165 15:10:19 event.app_repeat -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:52.165 15:10:19 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.165 1+0 records in 00:05:52.165 1+0 records out 00:05:52.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229499 s, 17.8 MB/s 00:05:52.165 15:10:19 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.165 15:10:19 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:52.165 15:10:19 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.165 15:10:19 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:52.165 15:10:19 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:52.165 15:10:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.165 15:10:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.165 15:10:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.165 15:10:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.165 15:10:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.424 15:10:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.424 { 00:05:52.424 "nbd_device": "/dev/nbd0", 00:05:52.424 "bdev_name": "Malloc0" 00:05:52.425 }, 00:05:52.425 { 00:05:52.425 "nbd_device": "/dev/nbd1", 00:05:52.425 "bdev_name": "Malloc1" 00:05:52.425 } 00:05:52.425 ]' 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.425 { 00:05:52.425 "nbd_device": "/dev/nbd0", 00:05:52.425 "bdev_name": "Malloc0" 00:05:52.425 
}, 00:05:52.425 { 00:05:52.425 "nbd_device": "/dev/nbd1", 00:05:52.425 "bdev_name": "Malloc1" 00:05:52.425 } 00:05:52.425 ]' 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.425 /dev/nbd1' 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.425 /dev/nbd1' 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.425 256+0 records in 00:05:52.425 256+0 records out 00:05:52.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106644 s, 98.3 MB/s 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.425 256+0 records in 00:05:52.425 256+0 records out 00:05:52.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160942 s, 65.2 MB/s 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.425 15:10:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.425 256+0 records in 00:05:52.425 256+0 records out 00:05:52.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197198 s, 53.2 MB/s 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.425 15:10:20 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.425 15:10:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.683 15:10:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.683 15:10:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.683 15:10:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.683 15:10:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.683 15:10:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.683 15:10:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.683 15:10:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.683 15:10:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.683 15:10:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.683 15:10:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.942 15:10:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.942 15:10:20 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.942 15:10:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.942 15:10:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.942 15:10:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.942 15:10:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.942 15:10:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.942 15:10:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.942 15:10:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.942 15:10:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.942 15:10:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.200 15:10:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.200 15:10:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.200 15:10:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.200 15:10:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.200 15:10:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.200 15:10:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.200 15:10:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.200 15:10:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.200 15:10:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.200 15:10:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.200 15:10:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.200 15:10:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.200 15:10:20 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.459 15:10:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.836 [2024-11-06 15:10:22.269815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.836 [2024-11-06 15:10:22.369172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.836 [2024-11-06 15:10:22.369173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.095 [2024-11-06 15:10:22.557359] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.095 [2024-11-06 15:10:22.557412] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.469 15:10:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.469 15:10:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:56.469 spdk_app_start Round 1 00:05:56.469 15:10:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3652071 /var/tmp/spdk-nbd.sock 00:05:56.469 15:10:24 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3652071 ']' 00:05:56.469 15:10:24 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.469 15:10:24 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:56.469 15:10:24 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:56.469 15:10:24 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:56.469 15:10:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.728 15:10:24 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:56.728 15:10:24 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:56.728 15:10:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.988 Malloc0 00:05:56.988 15:10:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.247 Malloc1 00:05:57.247 15:10:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.247 15:10:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.505 /dev/nbd0 00:05:57.505 15:10:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.505 15:10:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.505 15:10:25 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:57.505 15:10:25 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:57.505 15:10:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:57.505 15:10:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:57.505 15:10:25 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:57.505 15:10:25 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:57.505 15:10:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:57.505 15:10:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:57.505 15:10:25 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.505 1+0 records in 00:05:57.505 1+0 records out 00:05:57.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168125 s, 24.4 MB/s 00:05:57.505 15:10:25 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.505 15:10:25 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:57.506 15:10:25 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.506 15:10:25 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:57.506 15:10:25 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:57.506 15:10:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.506 15:10:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.506 15:10:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.764 /dev/nbd1 00:05:57.764 15:10:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.764 15:10:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.764 1+0 records in 00:05:57.764 1+0 records out 00:05:57.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237975 s, 17.2 MB/s 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:57.764 15:10:25 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:57.764 15:10:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.764 15:10:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.764 15:10:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.764 15:10:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.764 15:10:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.024 { 00:05:58.024 "nbd_device": "/dev/nbd0", 00:05:58.024 "bdev_name": "Malloc0" 00:05:58.024 }, 00:05:58.024 { 00:05:58.024 "nbd_device": "/dev/nbd1", 00:05:58.024 "bdev_name": "Malloc1" 00:05:58.024 } 00:05:58.024 ]' 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.024 { 00:05:58.024 "nbd_device": "/dev/nbd0", 00:05:58.024 "bdev_name": "Malloc0" 00:05:58.024 }, 00:05:58.024 { 00:05:58.024 "nbd_device": "/dev/nbd1", 00:05:58.024 "bdev_name": "Malloc1" 00:05:58.024 } 00:05:58.024 ]' 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.024 /dev/nbd1' 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.024 /dev/nbd1' 00:05:58.024 
15:10:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.024 256+0 records in 00:05:58.024 256+0 records out 00:05:58.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106725 s, 98.3 MB/s 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.024 256+0 records in 00:05:58.024 256+0 records out 00:05:58.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160905 s, 65.2 MB/s 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.024 256+0 records in 00:05:58.024 256+0 records out 00:05:58.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196216 s, 53.4 MB/s 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.024 15:10:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.283 15:10:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.283 15:10:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.283 15:10:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.283 15:10:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.283 15:10:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.283 15:10:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.283 15:10:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.283 15:10:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.283 15:10:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.283 15:10:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.541 15:10:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.541 15:10:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.541 15:10:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.541 15:10:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.541 15:10:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.541 15:10:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.541 15:10:26 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:58.541 15:10:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.542 15:10:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.542 15:10:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.542 15:10:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.800 15:10:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.800 15:10:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.800 15:10:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.800 15:10:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.800 15:10:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.800 15:10:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.800 15:10:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:58.800 15:10:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.800 15:10:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.800 15:10:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.800 15:10:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.800 15:10:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.800 15:10:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.059 15:10:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.436 [2024-11-06 15:10:27.836315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.436 [2024-11-06 15:10:27.935752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.436 [2024-11-06 15:10:27.935767] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.694 [2024-11-06 15:10:28.123948] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.694 [2024-11-06 15:10:28.123992] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.071 15:10:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.071 15:10:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:02.071 spdk_app_start Round 2 00:06:02.071 15:10:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3652071 /var/tmp/spdk-nbd.sock 00:06:02.071 15:10:29 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3652071 ']' 00:06:02.071 15:10:29 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.071 15:10:29 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:02.071 15:10:29 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
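The `waitfornbd` helper traced repeatedly above retries up to 20 times, grepping `/proc/partitions` for the device name until it appears. A minimal sketch of that loop, with a temp file standing in for `/proc/partitions` so it runs without the NBD module loaded (the file name and contents here are stand-ins, not from the test):

```shell
# Sketch of the waitfornbd polling loop: retry up to 20 times until the
# device name shows up in the partitions table. A plain temp file stands
# in for /proc/partitions so the sketch runs anywhere.
partitions=$(mktemp)
printf '%s\n' nbd0 nbd1 > "$partitions"   # pretend both devices exist
nbd_name=nbd0
found=0
i=1
while [ "$i" -le 20 ]; do
    if grep -q -w "$nbd_name" "$partitions"; then
        found=1
        break
    fi
    sleep 0.1
    i=$((i + 1))
done
echo "found=$found after $i tries"
rm -f "$partitions"
```

The `-w` flag matters: it prevents `nbd1` from matching a partition named `nbd10`.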
00:06:02.071 15:10:29 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:02.071 15:10:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.329 15:10:29 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:02.329 15:10:29 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:02.329 15:10:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.588 Malloc0 00:06:02.588 15:10:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.847 Malloc1 00:06:02.847 15:10:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.847 15:10:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.105 /dev/nbd0 00:06:03.105 15:10:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.105 15:10:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.105 1+0 records in 00:06:03.105 1+0 records out 00:06:03.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230926 s, 17.7 MB/s 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:03.105 15:10:30 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:03.105 15:10:30 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:03.105 15:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.105 15:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.105 15:10:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.364 /dev/nbd1 00:06:03.364 15:10:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.364 15:10:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.364 1+0 records in 00:06:03.364 1+0 records out 00:06:03.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242169 s, 16.9 MB/s 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:03.364 15:10:30 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:03.364 15:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.364 15:10:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.364 15:10:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.364 15:10:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.364 15:10:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.623 { 00:06:03.623 "nbd_device": "/dev/nbd0", 00:06:03.623 "bdev_name": "Malloc0" 00:06:03.623 }, 00:06:03.623 { 00:06:03.623 "nbd_device": "/dev/nbd1", 00:06:03.623 "bdev_name": "Malloc1" 00:06:03.623 } 00:06:03.623 ]' 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.623 { 00:06:03.623 "nbd_device": "/dev/nbd0", 00:06:03.623 "bdev_name": "Malloc0" 00:06:03.623 }, 00:06:03.623 { 00:06:03.623 "nbd_device": "/dev/nbd1", 00:06:03.623 "bdev_name": "Malloc1" 00:06:03.623 } 00:06:03.623 ]' 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.623 /dev/nbd1' 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.623 /dev/nbd1' 00:06:03.623 
15:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.623 256+0 records in 00:06:03.623 256+0 records out 00:06:03.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107375 s, 97.7 MB/s 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.623 256+0 records in 00:06:03.623 256+0 records out 00:06:03.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163635 s, 64.1 MB/s 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.623 256+0 records in 00:06:03.623 256+0 records out 00:06:03.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192209 s, 54.6 MB/s 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.623 15:10:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.624 15:10:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.882 15:10:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.882 15:10:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.882 15:10:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.882 15:10:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.882 15:10:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.882 15:10:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.882 15:10:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.882 15:10:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.882 15:10:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.882 15:10:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.140 15:10:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.140 15:10:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.140 15:10:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.140 15:10:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.140 15:10:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.140 15:10:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.140 15:10:31 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:04.140 15:10:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.140 15:10:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.140 15:10:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.140 15:10:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.398 15:10:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.398 15:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.398 15:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.398 15:10:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.398 15:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.398 15:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.398 15:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.398 15:10:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.398 15:10:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.398 15:10:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.398 15:10:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.398 15:10:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.399 15:10:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.657 15:10:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.033 [2024-11-06 15:10:33.453938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.033 [2024-11-06 15:10:33.554295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.033 [2024-11-06 15:10:33.554298] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.292 [2024-11-06 15:10:33.749379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.292 [2024-11-06 15:10:33.749423] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.668 15:10:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3652071 /var/tmp/spdk-nbd.sock 00:06:07.668 15:10:35 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3652071 ']' 00:06:07.668 15:10:35 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.668 15:10:35 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:07.668 15:10:35 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
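The `nbd_dd_data_verify` sequence traced above writes 1 MiB of `/dev/urandom` data through each NBD device, then re-reads and compares it with `cmp -b -n 1M`. A sketch of that write/verify pattern, with plain temp files standing in for `/dev/nbd0` so no SPDK target or NBD module is needed (`oflag=direct` is dropped for the same reason):

```shell
# Write phase: generate a random reference file, copy it to the "device".
# Verify phase: byte-compare the first 1 MiB of both.
tmp_file=$(mktemp)   # plays the role of .../test/event/nbdrandtest
nbd0=$(mktemp)       # stand-in for /dev/nbd0
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
dd if="$tmp_file" of="$nbd0" bs=4096 count=256 2>/dev/null   # write phase
cmp -b -n 1M "$tmp_file" "$nbd0"                             # verify phase
verify_rc=$?
echo "verify_rc=$verify_rc"
rm -f "$tmp_file" "$nbd0"
```

A non-zero `cmp` exit here would fail the test under `set -e`, which is how data corruption surfaces in this suite.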
00:06:07.668 15:10:35 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.668 15:10:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.927 15:10:35 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:07.927 15:10:35 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:07.927 15:10:35 event.app_repeat -- event/event.sh@39 -- # killprocess 3652071 00:06:07.927 15:10:35 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3652071 ']' 00:06:07.927 15:10:35 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3652071 00:06:07.927 15:10:35 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:07.927 15:10:35 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:07.927 15:10:35 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3652071 00:06:07.927 15:10:35 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:07.927 15:10:35 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:07.927 15:10:35 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3652071' 00:06:07.927 killing process with pid 3652071 00:06:07.927 15:10:35 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3652071 00:06:07.927 15:10:35 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3652071 00:06:09.320 spdk_app_start is called in Round 0. 00:06:09.320 Shutdown signal received, stop current app iteration 00:06:09.320 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:06:09.320 spdk_app_start is called in Round 1. 00:06:09.320 Shutdown signal received, stop current app iteration 00:06:09.320 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:06:09.320 spdk_app_start is called in Round 2. 
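The `killprocess` helper traced above checks that the pid's command name is sane via `ps --no-headers -o comm=` (a GNU procps invocation, as seen in the log), sends SIGTERM, then waits for the exit status. A sketch with a background `sleep` standing in for the SPDK app:

```shell
# Confirm the pid's command name, SIGTERM it, and collect the exit status.
sleep 30 &
pid=$!
process_name=$(ps --no-headers -o comm= "$pid")
echo "process_name=$process_name"
kill "$pid"
wait "$pid"
wait_rc=$?
echo "wait_rc=$wait_rc"   # 128 + 15 (SIGTERM) = 143 in POSIX shells
```

Waiting on the pid after `kill` is what lets the script distinguish a clean SIGTERM shutdown from a crash with some other status.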
00:06:09.320 Shutdown signal received, stop current app iteration 00:06:09.320 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:06:09.320 spdk_app_start is called in Round 3. 00:06:09.320 Shutdown signal received, stop current app iteration 00:06:09.320 15:10:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:09.320 15:10:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:09.320 00:06:09.320 real 0m18.764s 00:06:09.320 user 0m39.823s 00:06:09.320 sys 0m2.675s 00:06:09.320 15:10:36 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.320 15:10:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.320 ************************************ 00:06:09.320 END TEST app_repeat 00:06:09.320 ************************************ 00:06:09.320 15:10:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:09.320 15:10:36 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.320 15:10:36 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:09.320 15:10:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:09.320 15:10:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.320 ************************************ 00:06:09.320 START TEST cpu_locks 00:06:09.320 ************************************ 00:06:09.320 15:10:36 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.320 * Looking for test storage... 
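The `nbd_get_count` values traced earlier (`count=2` while the disks were attached, `count=0` after `nbd_stop_disk`) come from piping the device names extracted by `jq` into `grep -c /dev/nbd`. A sketch of that counting step; the trailing `true` in the xtrace is read here as a guard for grep's non-zero exit on zero matches, which is an inference from the log, not a quote of `nbd_common.sh`:

```shell
# Count NBD device names the way nbd_get_count does: grep -c the list.
# grep exits 1 when it counts zero matches, so '|| true' keeps the
# pipeline from tripping 'set -e' in the empty case.
nbd_disks_name='/dev/nbd0
/dev/nbd1'
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "count=$count"
empty_count=$(echo '' | grep -c /dev/nbd || true)
echo "empty_count=$empty_count"
```

Note that `grep -c` still prints `0` even when it exits non-zero, so `empty_count` ends up as `0`, matching the `count=0` seen in the log after the disks are stopped.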
00:06:09.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:09.320 15:10:36 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:09.320 15:10:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version
00:06:09.320 15:10:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:09.320 15:10:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:09.320 15:10:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:09.320 15:10:36 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:09.320 15:10:36 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:09.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.320 --rc genhtml_branch_coverage=1
00:06:09.320 --rc genhtml_function_coverage=1
00:06:09.320 --rc genhtml_legend=1
00:06:09.320 --rc geninfo_all_blocks=1
00:06:09.320 --rc geninfo_unexecuted_blocks=1
00:06:09.320 
00:06:09.320 '
00:06:09.321 15:10:36 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:09.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.321 --rc genhtml_branch_coverage=1
00:06:09.321 --rc genhtml_function_coverage=1
00:06:09.321 --rc genhtml_legend=1
00:06:09.321 --rc geninfo_all_blocks=1
00:06:09.321 --rc geninfo_unexecuted_blocks=1
00:06:09.321 
00:06:09.321 '
00:06:09.321 15:10:36 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:09.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.321 --rc genhtml_branch_coverage=1
00:06:09.321 --rc genhtml_function_coverage=1
00:06:09.321 --rc genhtml_legend=1
00:06:09.321 --rc geninfo_all_blocks=1
00:06:09.321 --rc geninfo_unexecuted_blocks=1
00:06:09.321 
00:06:09.321 '
00:06:09.321 15:10:36 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:09.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.321 --rc genhtml_branch_coverage=1
00:06:09.321 --rc genhtml_function_coverage=1
00:06:09.321 --rc genhtml_legend=1
00:06:09.321 --rc geninfo_all_blocks=1
00:06:09.321 --rc geninfo_unexecuted_blocks=1
00:06:09.321 
00:06:09.321 '
00:06:09.321 15:10:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:09.321 15:10:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:09.321 15:10:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:09.321 15:10:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:09.321 15:10:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:09.321 15:10:36 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:09.321 15:10:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:09.321 ************************************
00:06:09.321 START TEST default_locks
00:06:09.321 ************************************
00:06:09.321 15:10:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks
00:06:09.321 15:10:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3655441
00:06:09.321 15:10:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3655441
00:06:09.321 15:10:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:09.321 15:10:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3655441 ']'
00:06:09.321 15:10:36 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:09.321 15:10:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:09.321 15:10:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:09.321 15:10:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:09.321 15:10:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:09.321 [2024-11-06 15:10:36.925103] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:06:09.321 [2024-11-06 15:10:36.925193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3655441 ]
00:06:09.580 [2024-11-06 15:10:37.049149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.580 [2024-11-06 15:10:37.154839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.515 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:10.515 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0
00:06:10.515 15:10:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3655441
00:06:10.515 15:10:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3655441
00:06:10.515 15:10:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:10.774 lslocks: write error
00:06:10.774 15:10:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3655441
00:06:10.774 15:10:38 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3655441 ']'
00:06:10.774 15:10:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3655441
00:06:10.774 15:10:38 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname
00:06:10.774 15:10:38 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:10.774 15:10:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3655441
00:06:10.774 15:10:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:10.774 15:10:38 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:10.774 15:10:38 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3655441'
killing process with pid 3655441
00:06:10.774 15:10:38 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3655441
00:06:10.774 15:10:38 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3655441
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3655441
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3655441
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3655441
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3655441 ']'
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:13.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3655441) - No such process
00:06:13.308 ERROR: process (pid: 3655441) is no longer running
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:13.308 
00:06:13.308 real 0m3.830s
00:06:13.308 user 0m3.811s
00:06:13.308 sys 0m0.629s
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:13.308 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:13.308 ************************************
00:06:13.308 END TEST default_locks
00:06:13.308 ************************************
00:06:13.308 15:10:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:13.308 15:10:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:13.308 15:10:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:13.308 15:10:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:13.308 ************************************
00:06:13.308 START TEST default_locks_via_rpc
00:06:13.308 ************************************
00:06:13.308 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc
00:06:13.308 15:10:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:13.308 15:10:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3656034
00:06:13.308 15:10:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3656034
00:06:13.308 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3656034 ']'
00:06:13.308 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:13.308 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:13.308 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:13.308 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:13.308 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:13.308 [2024-11-06 15:10:40.812632] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:06:13.308 [2024-11-06 15:10:40.812721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656034 ]
00:06:13.308 [2024-11-06 15:10:40.932988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.568 [2024-11-06 15:10:41.039287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3656034
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3656034
00:06:14.503 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:14.762 15:10:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3656034
00:06:14.762 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3656034 ']'
00:06:14.762 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3656034
00:06:14.762 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname
00:06:14.762 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:14.762 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3656034
00:06:14.762 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:14.762 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:14.762 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3656034'
killing process with pid 3656034
00:06:14.762 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3656034
00:06:14.762 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3656034
00:06:17.294 
00:06:17.294 real 0m3.847s
00:06:17.294 user 0m3.835s
00:06:17.294 sys 0m0.649s
00:06:17.294 15:10:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:17.294 15:10:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:17.294 ************************************
00:06:17.294 END TEST default_locks_via_rpc
00:06:17.294 ************************************
00:06:17.294 15:10:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:17.294 15:10:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:17.294 15:10:44 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:17.294 15:10:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:17.294 ************************************
00:06:17.294 START TEST non_locking_app_on_locked_coremask
00:06:17.294 ************************************
00:06:17.294 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask
00:06:17.294 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:17.294 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3656750
00:06:17.294 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3656750 /var/tmp/spdk.sock
00:06:17.294 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3656750 ']'
00:06:17.294 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:17.294 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:17.294 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:17.294 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:17.294 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:17.294 [2024-11-06 15:10:44.738637] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:06:17.294 [2024-11-06 15:10:44.738728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656750 ]
00:06:17.294 [2024-11-06 15:10:44.862832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.553 [2024-11-06 15:10:44.969649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.490 15:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:18.490 15:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:18.490 15:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3656980
00:06:18.490 15:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3656980 /var/tmp/spdk2.sock
00:06:18.490 15:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:18.490 15:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3656980 ']'
00:06:18.490 15:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:18.490 15:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:18.490 15:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:18.490 15:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:18.490 15:10:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:18.490 [2024-11-06 15:10:45.877361] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:06:18.490 [2024-11-06 15:10:45.877470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656980 ]
00:06:18.490 [2024-11-06 15:10:46.032123] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:18.490 [2024-11-06 15:10:46.032183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.749 [2024-11-06 15:10:46.241856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.282 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:21.282 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:21.282 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3656750
00:06:21.282 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3656750
00:06:21.282 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:21.282 lslocks: write error
00:06:21.282 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3656750
00:06:21.282 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3656750 ']'
00:06:21.282 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3656750
00:06:21.282 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:21.282 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:21.282 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3656750
00:06:21.541 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:21.541 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:21.541 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3656750'
killing process with pid 3656750
00:06:21.541 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3656750
00:06:21.541 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3656750
00:06:26.813 15:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3656980
00:06:26.813 15:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3656980 ']'
00:06:26.813 15:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3656980
00:06:26.813 15:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:26.813 15:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:26.813 15:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3656980
00:06:26.813 15:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:26.813 15:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:26.813 15:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3656980'
killing process with pid 3656980
00:06:26.813 15:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3656980
00:06:26.813 15:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3656980
00:06:28.191 
00:06:28.191 real 0m11.166s
00:06:28.191 user 0m11.434s
00:06:28.191 sys 0m1.218s
00:06:28.191 15:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:28.191 15:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:28.191 ************************************
00:06:28.191 END TEST non_locking_app_on_locked_coremask
00:06:28.191 ************************************
00:06:28.450 15:10:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:28.450 15:10:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:28.450 15:10:55 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:28.450 15:10:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:28.450 ************************************
00:06:28.450 START TEST locking_app_on_unlocked_coremask
00:06:28.450 ************************************
00:06:28.450 15:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask
00:06:28.450 15:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3658630
00:06:28.450 15:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3658630 /var/tmp/spdk.sock
00:06:28.450 15:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:28.450 15:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3658630 ']'
00:06:28.450 15:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:28.450 15:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:28.450 15:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:28.450 15:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:28.450 15:10:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:28.450 [2024-11-06 15:10:55.975908] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:06:28.450 [2024-11-06 15:10:55.975998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658630 ]
00:06:28.709 [2024-11-06 15:10:56.098528] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:28.709 [2024-11-06 15:10:56.098576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:28.709 [2024-11-06 15:10:56.202276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.646 15:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:29.646 15:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:29.646 15:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3658862
00:06:29.646 15:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3658862 /var/tmp/spdk2.sock
00:06:29.646 15:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:29.646 15:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3658862 ']'
00:06:29.646 15:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:29.646 15:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:06:29.646 15:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:29.646 15:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:06:29.646 15:10:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:29.646 [2024-11-06 15:10:57.120118] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:06:29.646 [2024-11-06 15:10:57.120233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658862 ]
00:06:29.646 [2024-11-06 15:10:57.275913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:29.905 [2024-11-06 15:10:57.491501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:32.437 15:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:32.437 15:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:06:32.437 15:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3658862
00:06:32.437 15:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3658862
00:06:32.437 15:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:32.694 lslocks: write error
00:06:32.694 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3658630
00:06:32.694 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3658630 ']'
00:06:32.694 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3658630
00:06:32.694 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:32.694 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:32.694 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3658630
00:06:32.694 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:32.694 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:32.694 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3658630'
killing process with pid 3658630
00:06:32.694 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3658630
00:06:32.694 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3658630
00:06:37.962 15:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3658862
00:06:37.962 15:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3658862 ']'
00:06:37.962 15:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3658862
00:06:37.962 15:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:06:37.962 15:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:37.962 15:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3658862
00:06:37.962 15:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:37.962 15:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:37.962 15:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3658862'
killing process with pid 3658862
00:06:37.962 15:11:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3658862
00:06:37.962 15:11:04 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3658862 00:06:39.868 00:06:39.868 real 0m11.227s 00:06:39.868 user 0m11.434s 00:06:39.868 sys 0m1.251s 00:06:39.868 15:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.868 15:11:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.868 ************************************ 00:06:39.868 END TEST locking_app_on_unlocked_coremask 00:06:39.868 ************************************ 00:06:39.868 15:11:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:39.868 15:11:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.868 15:11:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.868 15:11:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.868 ************************************ 00:06:39.868 START TEST locking_app_on_locked_coremask 00:06:39.868 ************************************ 00:06:39.868 15:11:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:39.868 15:11:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3660632 00:06:39.868 15:11:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3660632 /var/tmp/spdk.sock 00:06:39.868 15:11:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.868 15:11:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3660632 ']' 00:06:39.868 15:11:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:39.868 15:11:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:39.868 15:11:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.868 15:11:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:39.868 15:11:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.868 [2024-11-06 15:11:07.275042] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:39.868 [2024-11-06 15:11:07.275134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660632 ] 00:06:39.868 [2024-11-06 15:11:07.401254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.127 [2024-11-06 15:11:07.505296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3660749 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3660749 /var/tmp/spdk2.sock 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3660749 /var/tmp/spdk2.sock 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3660749 /var/tmp/spdk2.sock 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3660749 ']' 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:40.695 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.954 [2024-11-06 15:11:08.370490] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:40.954 [2024-11-06 15:11:08.370579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660749 ] 00:06:40.954 [2024-11-06 15:11:08.529350] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3660632 has claimed it. 00:06:40.954 [2024-11-06 15:11:08.529411] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3660749) - No such process 00:06:41.522 ERROR: process (pid: 3660749) is no longer running 00:06:41.522 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:41.522 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:41.522 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:41.522 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.522 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.522 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.522 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3660632 00:06:41.522 15:11:08 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3660632 00:06:41.522 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.781 lslocks: write error 00:06:41.781 15:11:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3660632 00:06:41.781 15:11:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3660632 ']' 00:06:41.781 15:11:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3660632 00:06:41.781 15:11:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:41.781 15:11:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:41.781 15:11:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3660632 00:06:41.781 15:11:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:41.781 15:11:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:41.781 15:11:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3660632' 00:06:41.781 killing process with pid 3660632 00:06:41.781 15:11:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3660632 00:06:41.781 15:11:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3660632 00:06:44.318 00:06:44.318 real 0m4.380s 00:06:44.318 user 0m4.499s 00:06:44.318 sys 0m0.763s 00:06:44.318 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.319 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.319 ************************************ 00:06:44.319 END TEST locking_app_on_locked_coremask 00:06:44.319 ************************************ 00:06:44.319 15:11:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:44.319 15:11:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.319 15:11:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.319 15:11:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.319 ************************************ 00:06:44.319 START TEST locking_overlapped_coremask 00:06:44.319 ************************************ 00:06:44.319 15:11:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:44.319 15:11:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3661455 00:06:44.319 15:11:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3661455 /var/tmp/spdk.sock 00:06:44.319 15:11:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:44.319 15:11:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3661455 ']' 00:06:44.319 15:11:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.319 15:11:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:44.319 15:11:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:44.319 15:11:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:44.319 15:11:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.319 [2024-11-06 15:11:11.720890] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:44.319 [2024-11-06 15:11:11.720979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661455 ] 00:06:44.319 [2024-11-06 15:11:11.845713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.577 [2024-11-06 15:11:11.961991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.577 [2024-11-06 15:11:11.962087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.577 [2024-11-06 15:11:11.962105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3661640 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3661640 /var/tmp/spdk2.sock 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 3661640 /var/tmp/spdk2.sock 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3661640 /var/tmp/spdk2.sock 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3661640 ']' 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:45.516 15:11:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.516 [2024-11-06 15:11:12.886987] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:45.516 [2024-11-06 15:11:12.887133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661640 ] 00:06:45.516 [2024-11-06 15:11:13.047662] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3661455 has claimed it. 00:06:45.516 [2024-11-06 15:11:13.047720] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3661640) - No such process 00:06:46.084 ERROR: process (pid: 3661640) is no longer running 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3661455 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3661455 ']' 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3661455 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3661455 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3661455' 00:06:46.084 killing process with pid 3661455 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3661455 00:06:46.084 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3661455 00:06:48.621 00:06:48.621 real 0m4.290s 00:06:48.621 user 0m11.814s 00:06:48.621 sys 0m0.632s 00:06:48.621 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.621 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.621 
************************************ 00:06:48.621 END TEST locking_overlapped_coremask 00:06:48.621 ************************************ 00:06:48.621 15:11:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:48.621 15:11:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.621 15:11:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.621 15:11:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.621 ************************************ 00:06:48.621 START TEST locking_overlapped_coremask_via_rpc 00:06:48.621 ************************************ 00:06:48.621 15:11:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:48.621 15:11:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3662194 00:06:48.621 15:11:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3662194 /var/tmp/spdk.sock 00:06:48.621 15:11:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:48.621 15:11:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3662194 ']' 00:06:48.621 15:11:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.621 15:11:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:48.621 15:11:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:48.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.622 15:11:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:48.622 15:11:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.622 [2024-11-06 15:11:16.081216] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:48.622 [2024-11-06 15:11:16.081325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662194 ] 00:06:48.622 [2024-11-06 15:11:16.203259] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:48.622 [2024-11-06 15:11:16.203304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.881 [2024-11-06 15:11:16.311564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.881 [2024-11-06 15:11:16.311635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.881 [2024-11-06 15:11:16.311657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.818 15:11:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:49.818 15:11:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:49.818 15:11:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3662426 00:06:49.818 15:11:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3662426 /var/tmp/spdk2.sock 00:06:49.818 15:11:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:06:49.818 15:11:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3662426 ']' 00:06:49.818 15:11:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.818 15:11:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:49.818 15:11:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.818 15:11:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:49.818 15:11:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.818 [2024-11-06 15:11:17.252455] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:49.818 [2024-11-06 15:11:17.252567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662426 ] 00:06:49.818 [2024-11-06 15:11:17.408072] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.818 [2024-11-06 15:11:17.408128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.078 [2024-11-06 15:11:17.637614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.078 [2024-11-06 15:11:17.637700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.078 [2024-11-06 15:11:17.637726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.613 15:11:19 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.613 [2024-11-06 15:11:19.772335] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3662194 has claimed it. 00:06:52.613 request: 00:06:52.613 { 00:06:52.613 "method": "framework_enable_cpumask_locks", 00:06:52.613 "req_id": 1 00:06:52.613 } 00:06:52.613 Got JSON-RPC error response 00:06:52.613 response: 00:06:52.613 { 00:06:52.613 "code": -32603, 00:06:52.613 "message": "Failed to claim CPU core: 2" 00:06:52.613 } 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3662194 /var/tmp/spdk.sock 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 
-- # '[' -z 3662194 ']' 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3662426 /var/tmp/spdk2.sock 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3662426 ']' 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.613 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.613 15:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.613 15:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:52.613 15:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:52.613 15:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.613 15:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.613 15:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.613 00:06:52.613 real 0m4.194s 00:06:52.613 user 0m1.145s 00:06:52.613 sys 0m0.192s 00:06:52.613 15:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.613 15:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.613 ************************************ 00:06:52.613 END TEST locking_overlapped_coremask_via_rpc 00:06:52.613 ************************************ 00:06:52.613 15:11:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:52.613 15:11:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3662194 ]] 00:06:52.613 15:11:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3662194 00:06:52.613 15:11:20 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3662194 ']' 00:06:52.614 15:11:20 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3662194 00:06:52.614 15:11:20 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:52.614 15:11:20 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:52.614 15:11:20 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3662194 00:06:52.873 15:11:20 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:52.873 15:11:20 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:52.873 15:11:20 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3662194' 00:06:52.873 killing process with pid 3662194 00:06:52.873 15:11:20 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3662194 00:06:52.873 15:11:20 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3662194 00:06:55.410 15:11:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3662426 ]] 00:06:55.410 15:11:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3662426 00:06:55.410 15:11:22 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3662426 ']' 00:06:55.410 15:11:22 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3662426 00:06:55.410 15:11:22 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:55.410 15:11:22 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:55.410 15:11:22 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3662426 00:06:55.410 15:11:22 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:55.410 15:11:22 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:55.410 15:11:22 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3662426' 00:06:55.410 killing process with pid 3662426 00:06:55.410 15:11:22 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3662426 00:06:55.410 15:11:22 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3662426 00:06:57.946 15:11:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.946 15:11:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:57.946 15:11:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3662194 ]] 00:06:57.946 15:11:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3662194 00:06:57.946 15:11:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3662194 ']' 00:06:57.946 15:11:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3662194 00:06:57.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3662194) - No such process 00:06:57.946 15:11:25 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3662194 is not found' 00:06:57.946 Process with pid 3662194 is not found 00:06:57.946 15:11:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3662426 ]] 00:06:57.946 15:11:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3662426 00:06:57.946 15:11:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3662426 ']' 00:06:57.946 15:11:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3662426 00:06:57.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3662426) - No such process 00:06:57.946 15:11:25 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3662426 is not found' 00:06:57.946 Process with pid 3662426 is not found 00:06:57.946 15:11:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.946 00:06:57.946 real 0m48.586s 00:06:57.946 user 1m24.081s 00:06:57.946 sys 0m6.503s 00:06:57.946 15:11:25 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.946 
15:11:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.946 ************************************ 00:06:57.946 END TEST cpu_locks 00:06:57.946 ************************************ 00:06:57.946 00:06:57.946 real 1m18.462s 00:06:57.946 user 2m23.814s 00:06:57.946 sys 0m10.400s 00:06:57.946 15:11:25 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.946 15:11:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.946 ************************************ 00:06:57.946 END TEST event 00:06:57.946 ************************************ 00:06:57.946 15:11:25 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:57.946 15:11:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:57.946 15:11:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.946 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:06:57.946 ************************************ 00:06:57.946 START TEST thread 00:06:57.946 ************************************ 00:06:57.946 15:11:25 thread -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:57.946 * Looking for test storage... 
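The cpu_locks cleanup above probes each pid with `kill -0`, inspects the process name with `ps`, and tolerates already-exited pids ("No such process"). A simplified stand-alone sketch of that pattern (not the exact `autotest_common.sh` helper, which has retries and sudo handling beyond this):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern seen in the log: probe the pid first,
# refuse to kill an elevated helper, and treat a missing process as success.
killprocess() {
  local pid=$1
  # kill -0 sends no signal; it only checks the pid exists and is signalable
  kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
  local name
  name=$(ps --no-headers -o comm= "$pid")
  if [ "$name" = sudo ]; then
    return 1   # never kill the sudo wrapper itself
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true   # reap if it was our child; ignore otherwise
  return 0
}
```

The `kill -0` probe is what produces the "No such process" branches later in this log, where cleanup runs a second time after the target has already exited.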
00:06:57.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:57.946 15:11:25 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.946 15:11:25 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.946 15:11:25 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.946 15:11:25 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.946 15:11:25 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.946 15:11:25 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.946 15:11:25 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.946 15:11:25 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.946 15:11:25 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.946 15:11:25 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.946 15:11:25 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.946 15:11:25 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.946 15:11:25 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.946 15:11:25 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.946 15:11:25 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.946 15:11:25 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:57.946 15:11:25 thread -- scripts/common.sh@345 -- # : 1 00:06:57.946 15:11:25 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.946 15:11:25 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.946 15:11:25 thread -- scripts/common.sh@365 -- # decimal 1 00:06:57.946 15:11:25 thread -- scripts/common.sh@353 -- # local d=1 00:06:57.946 15:11:25 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.946 15:11:25 thread -- scripts/common.sh@355 -- # echo 1 00:06:57.946 15:11:25 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.946 15:11:25 thread -- scripts/common.sh@366 -- # decimal 2 00:06:57.946 15:11:25 thread -- scripts/common.sh@353 -- # local d=2 00:06:57.946 15:11:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.946 15:11:25 thread -- scripts/common.sh@355 -- # echo 2 00:06:57.946 15:11:25 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.946 15:11:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.946 15:11:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.946 15:11:25 thread -- scripts/common.sh@368 -- # return 0 00:06:57.946 15:11:25 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.946 15:11:25 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.946 --rc genhtml_branch_coverage=1 00:06:57.946 --rc genhtml_function_coverage=1 00:06:57.946 --rc genhtml_legend=1 00:06:57.946 --rc geninfo_all_blocks=1 00:06:57.946 --rc geninfo_unexecuted_blocks=1 00:06:57.946 00:06:57.947 ' 00:06:57.947 15:11:25 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.947 --rc genhtml_branch_coverage=1 00:06:57.947 --rc genhtml_function_coverage=1 00:06:57.947 --rc genhtml_legend=1 00:06:57.947 --rc geninfo_all_blocks=1 00:06:57.947 --rc geninfo_unexecuted_blocks=1 00:06:57.947 00:06:57.947 ' 00:06:57.947 15:11:25 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.947 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.947 --rc genhtml_branch_coverage=1 00:06:57.947 --rc genhtml_function_coverage=1 00:06:57.947 --rc genhtml_legend=1 00:06:57.947 --rc geninfo_all_blocks=1 00:06:57.947 --rc geninfo_unexecuted_blocks=1 00:06:57.947 00:06:57.947 ' 00:06:57.947 15:11:25 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.947 --rc genhtml_branch_coverage=1 00:06:57.947 --rc genhtml_function_coverage=1 00:06:57.947 --rc genhtml_legend=1 00:06:57.947 --rc geninfo_all_blocks=1 00:06:57.947 --rc geninfo_unexecuted_blocks=1 00:06:57.947 00:06:57.947 ' 00:06:57.947 15:11:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.947 15:11:25 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:57.947 15:11:25 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.947 15:11:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.947 ************************************ 00:06:57.947 START TEST thread_poller_perf 00:06:57.947 ************************************ 00:06:57.947 15:11:25 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.947 [2024-11-06 15:11:25.567249] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
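The `lt 1.15 2` / `cmp_versions` check above decides whether the installed lcov predates 2.0 by splitting each version string on separators and comparing component-wise, padding missing fields with 0. A minimal re-implementation of that "strictly less-than" check (a sketch, not the exact `scripts/common.sh` code, which also handles `-` and `:` separators):

```shell
#!/usr/bin/env bash
# Component-wise version compare: lt A B succeeds iff A < B.
lt() {
  local IFS=.
  local -a a=($1) b=($2)          # split on "." via IFS
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # pad short versions with 0
    if (( x < y )); then return 0; fi
    if (( x > y )); then return 1; fi
  done
  return 1   # equal versions are not strictly less-than
}
```

Note this is a numeric compare per field, so `1.2 < 1.10` holds, which plain string comparison would get wrong; fields with leading zeros would need extra care (bash arithmetic treats them as octal).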
00:06:57.947 [2024-11-06 15:11:25.567328] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663915 ] 00:06:58.207 [2024-11-06 15:11:25.689894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.207 [2024-11-06 15:11:25.793696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.207 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:59.601 [2024-11-06T14:11:27.239Z] ====================================== 00:06:59.601 [2024-11-06T14:11:27.239Z] busy:2109350420 (cyc) 00:06:59.601 [2024-11-06T14:11:27.239Z] total_run_count: 403000 00:06:59.601 [2024-11-06T14:11:27.239Z] tsc_hz: 2100000000 (cyc) 00:06:59.601 [2024-11-06T14:11:27.239Z] ====================================== 00:06:59.601 [2024-11-06T14:11:27.239Z] poller_cost: 5234 (cyc), 2492 (nsec) 00:06:59.601 00:06:59.601 real 0m1.484s 00:06:59.601 user 0m1.350s 00:06:59.601 sys 0m0.129s 00:06:59.601 15:11:27 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.601 15:11:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.601 ************************************ 00:06:59.601 END TEST thread_poller_perf 00:06:59.601 ************************************ 00:06:59.601 15:11:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.601 15:11:27 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:59.601 15:11:27 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.601 15:11:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.601 ************************************ 00:06:59.601 START TEST thread_poller_perf 00:06:59.601 
************************************ 00:06:59.601 15:11:27 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.601 [2024-11-06 15:11:27.125608] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:59.601 [2024-11-06 15:11:27.125686] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664166 ] 00:06:59.887 [2024-11-06 15:11:27.253169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.887 [2024-11-06 15:11:27.355456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.887 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:01.355 [2024-11-06T14:11:28.993Z] ====================================== 00:07:01.356 [2024-11-06T14:11:28.994Z] busy:2102339080 (cyc) 00:07:01.356 [2024-11-06T14:11:28.994Z] total_run_count: 5169000 00:07:01.356 [2024-11-06T14:11:28.994Z] tsc_hz: 2100000000 (cyc) 00:07:01.356 [2024-11-06T14:11:28.994Z] ====================================== 00:07:01.356 [2024-11-06T14:11:28.994Z] poller_cost: 406 (cyc), 193 (nsec) 00:07:01.356 00:07:01.356 real 0m1.486s 00:07:01.356 user 0m1.349s 00:07:01.356 sys 0m0.131s 00:07:01.356 15:11:28 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.356 15:11:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.356 ************************************ 00:07:01.356 END TEST thread_poller_perf 00:07:01.356 ************************************ 00:07:01.356 15:11:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.356 00:07:01.356 real 0m3.288s 00:07:01.356 user 0m2.860s 00:07:01.356 sys 0m0.439s 00:07:01.356 15:11:28 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.356 15:11:28 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.356 ************************************ 00:07:01.356 END TEST thread 00:07:01.356 ************************************ 00:07:01.356 15:11:28 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:01.356 15:11:28 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.356 15:11:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.356 15:11:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.356 15:11:28 -- common/autotest_common.sh@10 -- # set +x 00:07:01.356 ************************************ 00:07:01.356 START TEST app_cmdline 00:07:01.356 ************************************ 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.356 * Looking for test storage... 00:07:01.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
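The two poller_perf summaries above derive `poller_cost` as busy cycles divided by `total_run_count`, then convert cycles to nanoseconds via `tsc_hz`. Re-checking the first run's figures (values copied from the log) with integer shell arithmetic:

```shell
#!/usr/bin/env bash
# poller_cost arithmetic from the first poller_perf run above:
#   cost(cyc)  = busy / total_run_count
#   cost(nsec) = cost(cyc) * 1e9 / tsc_hz
busy=2109350420
runs=403000
tsc_hz=2100000000
cost_cyc=$(( busy / runs ))
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))
echo "poller_cost: $cost_cyc (cyc), $cost_nsec (nsec)"
```

This reproduces the reported `poller_cost: 5234 (cyc), 2492 (nsec)`; the second run works out the same way (2102339080 / 5169000 = 406 cyc, i.e. 193 nsec at 2.1 GHz) — the 0-microsecond-period pollers are roughly 13x cheaper per iteration than the 1-microsecond ones.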
00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.356 15:11:28 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:01.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.356 --rc genhtml_branch_coverage=1 
00:07:01.356 --rc genhtml_function_coverage=1 00:07:01.356 --rc genhtml_legend=1 00:07:01.356 --rc geninfo_all_blocks=1 00:07:01.356 --rc geninfo_unexecuted_blocks=1 00:07:01.356 00:07:01.356 ' 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:01.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.356 --rc genhtml_branch_coverage=1 00:07:01.356 --rc genhtml_function_coverage=1 00:07:01.356 --rc genhtml_legend=1 00:07:01.356 --rc geninfo_all_blocks=1 00:07:01.356 --rc geninfo_unexecuted_blocks=1 00:07:01.356 00:07:01.356 ' 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:01.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.356 --rc genhtml_branch_coverage=1 00:07:01.356 --rc genhtml_function_coverage=1 00:07:01.356 --rc genhtml_legend=1 00:07:01.356 --rc geninfo_all_blocks=1 00:07:01.356 --rc geninfo_unexecuted_blocks=1 00:07:01.356 00:07:01.356 ' 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:01.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.356 --rc genhtml_branch_coverage=1 00:07:01.356 --rc genhtml_function_coverage=1 00:07:01.356 --rc genhtml_legend=1 00:07:01.356 --rc geninfo_all_blocks=1 00:07:01.356 --rc geninfo_unexecuted_blocks=1 00:07:01.356 00:07:01.356 ' 00:07:01.356 15:11:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.356 15:11:28 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.356 15:11:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3664479 00:07:01.356 15:11:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3664479 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3664479 ']' 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:01.356 15:11:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.356 [2024-11-06 15:11:28.922484] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:01.356 [2024-11-06 15:11:28.922575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664479 ] 00:07:01.615 [2024-11-06 15:11:29.045435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.615 [2024-11-06 15:11:29.150245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.550 15:11:29 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:02.550 15:11:29 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:02.550 15:11:29 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:02.550 { 00:07:02.550 "version": "SPDK v25.01-pre git sha1 d1c46ed8e", 00:07:02.550 "fields": { 00:07:02.550 "major": 25, 00:07:02.550 "minor": 1, 00:07:02.550 "patch": 0, 00:07:02.550 "suffix": "-pre", 00:07:02.550 "commit": "d1c46ed8e" 00:07:02.550 } 00:07:02.550 } 00:07:02.550 15:11:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:02.550 15:11:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:02.550 15:11:30 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:07:02.550 15:11:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:02.550 15:11:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:02.550 15:11:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:02.550 15:11:30 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.550 15:11:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:02.550 15:11:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.550 15:11:30 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.808 15:11:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:02.808 15:11:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:02.808 15:11:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.808 request: 00:07:02.808 { 00:07:02.808 "method": "env_dpdk_get_mem_stats", 00:07:02.808 "req_id": 1 00:07:02.808 } 00:07:02.808 Got JSON-RPC error response 00:07:02.808 response: 00:07:02.808 { 00:07:02.808 "code": -32601, 00:07:02.808 "message": "Method not found" 00:07:02.808 } 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.808 15:11:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3664479 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3664479 ']' 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3664479 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:02.808 15:11:30 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3664479 00:07:03.067 15:11:30 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:03.067 15:11:30 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:03.067 15:11:30 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3664479' 00:07:03.067 killing process with pid 3664479 00:07:03.067 
15:11:30 app_cmdline -- common/autotest_common.sh@971 -- # kill 3664479 00:07:03.067 15:11:30 app_cmdline -- common/autotest_common.sh@976 -- # wait 3664479 00:07:05.599 00:07:05.599 real 0m4.046s 00:07:05.599 user 0m4.298s 00:07:05.599 sys 0m0.592s 00:07:05.599 15:11:32 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:05.599 15:11:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.599 ************************************ 00:07:05.599 END TEST app_cmdline 00:07:05.599 ************************************ 00:07:05.599 15:11:32 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:05.599 15:11:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:05.599 15:11:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.599 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:07:05.599 ************************************ 00:07:05.599 START TEST version 00:07:05.599 ************************************ 00:07:05.599 15:11:32 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:05.599 * Looking for test storage... 
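In the app_cmdline test above, `spdk_tgt` is started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so the deliberate `env_dpdk_get_mem_stats` call fails with JSON-RPC error -32601 (Method not found). A purely hypothetical shell sketch of that allowlist dispatch (SPDK enforces this in C inside the target, not in shell; the `dispatch` function here is illustrative only):

```shell
#!/usr/bin/env bash
# Hypothetical model of --rpcs-allowed: only listed methods are dispatched,
# everything else gets the JSON-RPC "Method not found" error seen in the log.
rpcs_allowed="spdk_get_version,rpc_get_methods"
dispatch() {
  local method=$1
  case ",$rpcs_allowed," in
    *",$method,"*)
      echo "ok: $method"
      ;;
    *)
      echo '{"code": -32601, "message": "Method not found"}'
      return 1
      ;;
  esac
}
```

Wrapping the list in commas before matching keeps `get_version` from accidentally matching `spdk_get_version` as a substring.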
00:07:05.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.599 15:11:32 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:05.599 15:11:32 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:05.599 15:11:32 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:05.599 15:11:32 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:05.599 15:11:32 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.599 15:11:32 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.599 15:11:32 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.599 15:11:32 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.599 15:11:32 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.599 15:11:32 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.599 15:11:32 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.600 15:11:32 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.600 15:11:32 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.600 15:11:32 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.600 15:11:32 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.600 15:11:32 version -- scripts/common.sh@344 -- # case "$op" in 00:07:05.600 15:11:32 version -- scripts/common.sh@345 -- # : 1 00:07:05.600 15:11:32 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.600 15:11:32 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.600 15:11:32 version -- scripts/common.sh@365 -- # decimal 1 00:07:05.600 15:11:32 version -- scripts/common.sh@353 -- # local d=1 00:07:05.600 15:11:32 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.600 15:11:32 version -- scripts/common.sh@355 -- # echo 1 00:07:05.600 15:11:32 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.600 15:11:32 version -- scripts/common.sh@366 -- # decimal 2 00:07:05.600 15:11:32 version -- scripts/common.sh@353 -- # local d=2 00:07:05.600 15:11:32 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.600 15:11:32 version -- scripts/common.sh@355 -- # echo 2 00:07:05.600 15:11:32 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.600 15:11:32 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.600 15:11:32 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.600 15:11:32 version -- scripts/common.sh@368 -- # return 0 00:07:05.600 15:11:32 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.600 15:11:32 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:05.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.600 --rc genhtml_branch_coverage=1 00:07:05.600 --rc genhtml_function_coverage=1 00:07:05.600 --rc genhtml_legend=1 00:07:05.600 --rc geninfo_all_blocks=1 00:07:05.600 --rc geninfo_unexecuted_blocks=1 00:07:05.600 00:07:05.600 ' 00:07:05.600 15:11:32 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:05.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.600 --rc genhtml_branch_coverage=1 00:07:05.600 --rc genhtml_function_coverage=1 00:07:05.600 --rc genhtml_legend=1 00:07:05.600 --rc geninfo_all_blocks=1 00:07:05.600 --rc geninfo_unexecuted_blocks=1 00:07:05.600 00:07:05.600 ' 00:07:05.600 15:11:32 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:05.600 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.600 --rc genhtml_branch_coverage=1 00:07:05.600 --rc genhtml_function_coverage=1 00:07:05.600 --rc genhtml_legend=1 00:07:05.600 --rc geninfo_all_blocks=1 00:07:05.600 --rc geninfo_unexecuted_blocks=1 00:07:05.600 00:07:05.600 ' 00:07:05.600 15:11:32 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:05.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.600 --rc genhtml_branch_coverage=1 00:07:05.600 --rc genhtml_function_coverage=1 00:07:05.600 --rc genhtml_legend=1 00:07:05.600 --rc geninfo_all_blocks=1 00:07:05.600 --rc geninfo_unexecuted_blocks=1 00:07:05.600 00:07:05.600 ' 00:07:05.600 15:11:32 version -- app/version.sh@17 -- # get_header_version major 00:07:05.600 15:11:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.600 15:11:32 version -- app/version.sh@14 -- # cut -f2 00:07:05.600 15:11:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.600 15:11:32 version -- app/version.sh@17 -- # major=25 00:07:05.600 15:11:32 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.600 15:11:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.600 15:11:32 version -- app/version.sh@14 -- # cut -f2 00:07:05.600 15:11:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.600 15:11:32 version -- app/version.sh@18 -- # minor=1 00:07:05.600 15:11:32 version -- app/version.sh@19 -- # get_header_version patch 00:07:05.600 15:11:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.600 15:11:32 version -- app/version.sh@14 -- # cut -f2 00:07:05.600 15:11:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.600 
15:11:32 version -- app/version.sh@19 -- # patch=0 00:07:05.600 15:11:32 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.600 15:11:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.600 15:11:32 version -- app/version.sh@14 -- # cut -f2 00:07:05.600 15:11:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.600 15:11:32 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.600 15:11:32 version -- app/version.sh@22 -- # version=25.1 00:07:05.600 15:11:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.600 15:11:32 version -- app/version.sh@28 -- # version=25.1rc0 00:07:05.600 15:11:32 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.600 15:11:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.600 15:11:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:05.600 15:11:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:05.600 00:07:05.600 real 0m0.241s 00:07:05.600 user 0m0.145s 00:07:05.600 sys 0m0.139s 00:07:05.600 15:11:33 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:05.600 15:11:33 version -- common/autotest_common.sh@10 -- # set +x 00:07:05.600 ************************************ 00:07:05.600 END TEST version 00:07:05.600 ************************************ 00:07:05.600 15:11:33 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:05.600 15:11:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:05.600 15:11:33 -- spdk/autotest.sh@194 -- # uname -s 00:07:05.600 15:11:33 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:05.600 15:11:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:05.600 15:11:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:05.600 15:11:33 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:05.600 15:11:33 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:05.600 15:11:33 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:05.600 15:11:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:05.600 15:11:33 -- common/autotest_common.sh@10 -- # set +x 00:07:05.600 15:11:33 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:05.600 15:11:33 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:05.600 15:11:33 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:05.600 15:11:33 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:05.600 15:11:33 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:05.600 15:11:33 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:05.600 15:11:33 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.600 15:11:33 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:05.600 15:11:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.600 15:11:33 -- common/autotest_common.sh@10 -- # set +x 00:07:05.600 ************************************ 00:07:05.600 START TEST nvmf_tcp 00:07:05.600 ************************************ 00:07:05.600 15:11:33 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.600 * Looking for test storage... 
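The `version` trace above reads SPDK's version out of `include/spdk/version.h` with a grep/cut/tr pipeline and assembles `25.1rc0`. A minimal sketch of that pipeline, using a temporary stand-in header (the file contents and the rc0 condition here are assumptions inferred from the trace, not the exact `app/version.sh` logic):

```shell
#!/usr/bin/env bash
# Stand-in for include/spdk/version.h (tab-separated, as cut -f2 expects).
header=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t25\n'       >  "$header"
printf '#define SPDK_VERSION_MINOR\t1\n'        >> "$header"
printf '#define SPDK_VERSION_PATCH\t0\n'        >> "$header"
printf '#define SPDK_VERSION_SUFFIX\t"-pre"\n'  >> "$header"

get_header_version() {
    # Match the #define line, take the second tab field, strip the quotes.
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$header" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="${major}.${minor}"
# The trace only appends the patch number when it is non-zero,
# and tags pre-release builds as rc0.
if [ "$patch" -ne 0 ]; then version="${version}.${patch}"; fi
if [ "$suffix" = "-pre" ]; then version="${version}rc0"; fi
echo "$version"   # prints 25.1rc0 for this stand-in header
rm -f "$header"
```

The final check in the trace (`[[ 25.1rc0 == \2\5\.\1\r\c\0 ]]`) then compares this shell-derived version against `python3 -c 'import spdk; print(spdk.__version__)'` to confirm the header and the Python package agree.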
00:07:05.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:05.600 15:11:33 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:05.600 15:11:33 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:05.600 15:11:33 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:05.859 15:11:33 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:05.859 15:11:33 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.859 15:11:33 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.859 15:11:33 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.859 15:11:33 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.859 15:11:33 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.859 15:11:33 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.859 15:11:33 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.859 15:11:33 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.859 15:11:33 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.859 15:11:33 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.859 15:11:33 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.860 15:11:33 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:05.860 15:11:33 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.860 15:11:33 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:05.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.860 --rc genhtml_branch_coverage=1 00:07:05.860 --rc genhtml_function_coverage=1 00:07:05.860 --rc genhtml_legend=1 00:07:05.860 --rc geninfo_all_blocks=1 00:07:05.860 --rc geninfo_unexecuted_blocks=1 00:07:05.860 00:07:05.860 ' 00:07:05.860 15:11:33 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:05.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.860 --rc genhtml_branch_coverage=1 00:07:05.860 --rc genhtml_function_coverage=1 00:07:05.860 --rc genhtml_legend=1 00:07:05.860 --rc geninfo_all_blocks=1 00:07:05.860 --rc geninfo_unexecuted_blocks=1 00:07:05.860 00:07:05.860 ' 00:07:05.860 15:11:33 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:07:05.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.860 --rc genhtml_branch_coverage=1 00:07:05.860 --rc genhtml_function_coverage=1 00:07:05.860 --rc genhtml_legend=1 00:07:05.860 --rc geninfo_all_blocks=1 00:07:05.860 --rc geninfo_unexecuted_blocks=1 00:07:05.860 00:07:05.860 ' 00:07:05.860 15:11:33 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:05.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.860 --rc genhtml_branch_coverage=1 00:07:05.860 --rc genhtml_function_coverage=1 00:07:05.860 --rc genhtml_legend=1 00:07:05.860 --rc geninfo_all_blocks=1 00:07:05.860 --rc geninfo_unexecuted_blocks=1 00:07:05.860 00:07:05.860 ' 00:07:05.860 15:11:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:05.860 15:11:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:05.860 15:11:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:05.860 15:11:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:05.860 15:11:33 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.860 15:11:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.860 ************************************ 00:07:05.860 START TEST nvmf_target_core 00:07:05.860 ************************************ 00:07:05.860 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:05.860 * Looking for test storage... 
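The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` trace that recurs before each LCOV block splits both version strings on `.`, `-` and `:` and compares them component by component. A simplified bash sketch of that comparison (the real `scripts/common.sh` helper supports more operators than `<`; this only reproduces the less-than path seen in the trace):

```shell
#!/usr/bin/env bash
# lt VER1 VER2 — exit 0 iff VER1 < VER2, component-wise.
lt() {
    local -a ver1 ver2
    local v ver1_l ver2_l c1 c2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}
    # Walk the longer of the two component lists, as in the traced loop:
    # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # A missing component compares as 0, so "2" behaves like "2.0".
        c1=${ver1[v]:-0}
        c2=${ver2[v]:-0}
        if (( c1 > c2 )); then return 1; fi
        if (( c1 < c2 )); then return 0; fi
    done
    return 1   # all components equal: not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace returns 0 for `lt 1.15 2`: the first components already differ (1 < 2), so the loop decides on iteration `v=0`, exactly as the `ver1[v]=1` / `ver2[v]=2` lines above show. The autotest harness uses this to pick lcov options appropriate for the installed lcov version.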
00:07:05.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:05.860 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:05.860 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:07:05.860 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:06.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.120 --rc genhtml_branch_coverage=1 00:07:06.120 --rc genhtml_function_coverage=1 00:07:06.120 --rc genhtml_legend=1 00:07:06.120 --rc geninfo_all_blocks=1 00:07:06.120 --rc geninfo_unexecuted_blocks=1 00:07:06.120 00:07:06.120 ' 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:06.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.120 --rc genhtml_branch_coverage=1 
00:07:06.120 --rc genhtml_function_coverage=1 00:07:06.120 --rc genhtml_legend=1 00:07:06.120 --rc geninfo_all_blocks=1 00:07:06.120 --rc geninfo_unexecuted_blocks=1 00:07:06.120 00:07:06.120 ' 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:06.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.120 --rc genhtml_branch_coverage=1 00:07:06.120 --rc genhtml_function_coverage=1 00:07:06.120 --rc genhtml_legend=1 00:07:06.120 --rc geninfo_all_blocks=1 00:07:06.120 --rc geninfo_unexecuted_blocks=1 00:07:06.120 00:07:06.120 ' 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:06.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.120 --rc genhtml_branch_coverage=1 00:07:06.120 --rc genhtml_function_coverage=1 00:07:06.120 --rc genhtml_legend=1 00:07:06.120 --rc geninfo_all_blocks=1 00:07:06.120 --rc geninfo_unexecuted_blocks=1 00:07:06.120 00:07:06.120 ' 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.120 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:06.121 ************************************ 00:07:06.121 START TEST nvmf_abort 00:07:06.121 ************************************ 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:06.121 * Looking for test storage... 
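The `common.sh: line 33: [: : integer expression expected` messages in the trace come from `'[' '' -eq 1 ']'`: a numeric test applied to an unset/empty variable. `test`/`[` requires both operands of `-eq` to be integers, so an empty string makes it print that error and return status 2, which the script then treats as false. A small sketch of the failure mode and the usual guard (the variable name is a hypothetical stand-in, not the one used in `nvmf/common.sh`):

```shell
#!/usr/bin/env bash
flag=""   # unset or empty in the CI environment

# Reproduces the logged failure: empty string is not an integer,
# so [ prints "integer expression expected" (suppressed here) and
# returns non-zero, falling through to the else branch.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
else
    echo "disabled or unset"
fi

# Defaulting the expansion avoids the error entirely: an empty or
# unset value is treated as 0 before the numeric comparison runs.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled or unset"
fi
```

Both branches print "disabled or unset" here, which matches the log: the error is cosmetic noise rather than a test failure, since the non-zero status of `[` sends the script down the same path a `0` value would.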
00:07:06.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:07:06.121 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.381 
15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:06.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.381 --rc genhtml_branch_coverage=1 00:07:06.381 --rc genhtml_function_coverage=1 00:07:06.381 --rc genhtml_legend=1 00:07:06.381 --rc geninfo_all_blocks=1 00:07:06.381 --rc 
geninfo_unexecuted_blocks=1 00:07:06.381 00:07:06.381 ' 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:06.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.381 --rc genhtml_branch_coverage=1 00:07:06.381 --rc genhtml_function_coverage=1 00:07:06.381 --rc genhtml_legend=1 00:07:06.381 --rc geninfo_all_blocks=1 00:07:06.381 --rc geninfo_unexecuted_blocks=1 00:07:06.381 00:07:06.381 ' 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:06.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.381 --rc genhtml_branch_coverage=1 00:07:06.381 --rc genhtml_function_coverage=1 00:07:06.381 --rc genhtml_legend=1 00:07:06.381 --rc geninfo_all_blocks=1 00:07:06.381 --rc geninfo_unexecuted_blocks=1 00:07:06.381 00:07:06.381 ' 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:06.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.381 --rc genhtml_branch_coverage=1 00:07:06.381 --rc genhtml_function_coverage=1 00:07:06.381 --rc genhtml_legend=1 00:07:06.381 --rc geninfo_all_blocks=1 00:07:06.381 --rc geninfo_unexecuted_blocks=1 00:07:06.381 00:07:06.381 ' 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.381 15:11:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.381 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:06.382 15:11:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:12.950 15:11:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:12.950 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:12.950 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:12.950 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:12.950 15:11:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:12.951 Found net devices under 0000:86:00.0: cvl_0_0 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:07:12.951 Found net devices under 0000:86:00.1: cvl_0_1 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:12.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:12.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:07:12.951 00:07:12.951 --- 10.0.0.2 ping statistics --- 00:07:12.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.951 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:12.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:07:12.951 00:07:12.951 --- 10.0.0.1 ping statistics --- 00:07:12.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.951 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3668620 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3668620 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3668620 ']' 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:12.951 15:11:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.951 [2024-11-06 15:11:39.866361] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:12.951 [2024-11-06 15:11:39.866458] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.951 [2024-11-06 15:11:39.997491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.951 [2024-11-06 15:11:40.118305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.951 [2024-11-06 15:11:40.118356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.951 [2024-11-06 15:11:40.118367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.951 [2024-11-06 15:11:40.118378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.951 [2024-11-06 15:11:40.118386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:12.951 [2024-11-06 15:11:40.120785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.951 [2024-11-06 15:11:40.120843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.951 [2024-11-06 15:11:40.120865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.211 [2024-11-06 15:11:40.726253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.211 Malloc0 00:07:13.211 15:11:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.211 Delay0 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.211 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.470 [2024-11-06 15:11:40.866561] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.470 15:11:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:13.470 [2024-11-06 15:11:40.994310] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:16.003 Initializing NVMe Controllers 00:07:16.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:16.003 controller IO queue size 128 less than required 00:07:16.003 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:16.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:16.003 Initialization complete. Launching workers. 
00:07:16.003 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34271 00:07:16.003 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34328, failed to submit 66 00:07:16.003 success 34271, unsuccessful 57, failed 0 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:16.003 rmmod nvme_tcp 00:07:16.003 rmmod nvme_fabrics 00:07:16.003 rmmod nvme_keyring 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:16.003 15:11:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3668620 ']' 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3668620 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3668620 ']' 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3668620 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3668620 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3668620' 00:07:16.003 killing process with pid 3668620 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3668620 00:07:16.003 15:11:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3668620 00:07:16.941 15:11:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:16.941 15:11:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:16.941 15:11:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:16.941 15:11:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:16.941 15:11:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:16.941 15:11:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:07:16.941 15:11:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:16.941 15:11:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.941 15:11:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:16.941 15:11:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.941 15:11:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.941 15:11:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.476 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:19.476 00:07:19.476 real 0m13.006s 00:07:19.476 user 0m16.329s 00:07:19.476 sys 0m5.431s 00:07:19.476 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:19.476 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:19.476 ************************************ 00:07:19.476 END TEST nvmf_abort 00:07:19.476 ************************************ 00:07:19.476 15:11:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:19.476 15:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:19.476 15:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:19.476 15:11:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:19.476 ************************************ 00:07:19.476 START TEST nvmf_ns_hotplug_stress 00:07:19.476 ************************************ 00:07:19.477 15:11:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:19.477 * Looking for test storage... 00:07:19.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.477 
15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.477 15:11:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:19.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.477 --rc genhtml_branch_coverage=1 00:07:19.477 --rc genhtml_function_coverage=1 00:07:19.477 --rc genhtml_legend=1 00:07:19.477 --rc geninfo_all_blocks=1 00:07:19.477 --rc geninfo_unexecuted_blocks=1 00:07:19.477 00:07:19.477 ' 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:19.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.477 --rc genhtml_branch_coverage=1 00:07:19.477 --rc genhtml_function_coverage=1 00:07:19.477 --rc genhtml_legend=1 00:07:19.477 --rc geninfo_all_blocks=1 00:07:19.477 --rc geninfo_unexecuted_blocks=1 00:07:19.477 00:07:19.477 ' 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:19.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.477 --rc genhtml_branch_coverage=1 00:07:19.477 --rc genhtml_function_coverage=1 00:07:19.477 --rc genhtml_legend=1 00:07:19.477 --rc geninfo_all_blocks=1 00:07:19.477 --rc geninfo_unexecuted_blocks=1 00:07:19.477 00:07:19.477 ' 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:19.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.477 --rc genhtml_branch_coverage=1 00:07:19.477 --rc genhtml_function_coverage=1 00:07:19.477 --rc genhtml_legend=1 00:07:19.477 --rc geninfo_all_blocks=1 00:07:19.477 --rc geninfo_unexecuted_blocks=1 00:07:19.477 
00:07:19.477 ' 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.477 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:19.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:19.478 15:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.042 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.042 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:26.042 15:11:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:26.042 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:26.042 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:26.042 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:26.042 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:26.043 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:26.043 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:26.043 15:11:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:26.043 Found net devices under 0000:86:00.0: cvl_0_0 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:26.043 15:11:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:26.043 Found net devices under 0000:86:00.1: cvl_0_1 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.043 15:11:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:26.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:07:26.043 00:07:26.043 --- 10.0.0.2 ping statistics --- 00:07:26.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.043 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:07:26.043 00:07:26.043 --- 10.0.0.1 ping statistics --- 00:07:26.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.043 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:26.043 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3672880 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3672880 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3672880 ']' 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:26.044 15:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.044 [2024-11-06 15:11:52.942187] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:26.044 [2024-11-06 15:11:52.942302] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.044 [2024-11-06 15:11:53.080799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.044 [2024-11-06 15:11:53.188434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.044 [2024-11-06 15:11:53.188480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.044 [2024-11-06 15:11:53.188491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.044 [2024-11-06 15:11:53.188501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.044 [2024-11-06 15:11:53.188509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:26.044 [2024-11-06 15:11:53.190830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.044 [2024-11-06 15:11:53.190900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.044 [2024-11-06 15:11:53.190920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.303 15:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:26.303 15:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:07:26.303 15:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:26.303 15:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:26.303 15:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.303 15:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.303 15:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:26.303 15:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:26.561 [2024-11-06 15:11:53.951733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.561 15:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:26.561 15:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.819 [2024-11-06 15:11:54.346793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.819 15:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.078 15:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:27.336 Malloc0 00:07:27.336 15:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:27.594 Delay0 00:07:27.594 15:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.594 15:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:27.852 NULL1 00:07:27.852 15:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:28.110 15:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:28.110 15:11:55 
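The setup sequence the log has executed so far can be sketched as the following minimal script. This is a hedged reconstruction, not the actual autotest script: `rpc` is stubbed with `echo` so the sketch runs without a live SPDK target, and every argument is copied from the `rpc.py` invocations visible in the log above (in the real run each call is `spdk/scripts/rpc.py` talking to the running `nvmf_tgt` application).

```shell
#!/bin/sh
# Hedged sketch of the target setup seen in the log. rpc is stubbed with echo
# so this runs standalone; arguments are copied from the log's rpc.py calls.
ncalls=0
rpc() { ncalls=$((ncalls + 1)); echo "rpc.py $*"; }
nqn=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0          # backing malloc bdev
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$nqn" Delay0           # becomes namespace 1
rpc bdev_null_create NULL1 1000 512               # NULL1 starts at size 1000
rpc nvmf_subsystem_add_ns "$nqn" NULL1            # second namespace
```

After this setup the test launches `spdk_nvme_perf` against `10.0.0.2:4420` (`-t 30 -q 128 -w randread -o 512 -Q 1000`, per the log) and records its PID for the hotplug loop that follows.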
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3673368 00:07:28.110 15:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:28.110 15:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.368 15:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.627 15:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:28.627 15:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:28.627 true 00:07:28.627 15:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:28.627 15:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.885 15:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.143 15:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:29.143 15:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:29.401 true 00:07:29.401 15:11:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:29.401 15:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.659 15:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.917 15:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:29.917 15:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:29.917 true 00:07:29.917 15:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:29.917 15:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.175 15:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.434 15:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:30.434 15:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:30.692 true 00:07:30.692 15:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:30.692 15:11:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.950 15:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.951 15:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:30.951 15:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:31.209 true 00:07:31.209 15:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:31.209 15:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.467 15:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.726 15:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:31.726 15:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:31.726 true 00:07:31.985 15:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:31.985 15:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.985 15:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.243 15:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:32.243 15:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:32.501 true 00:07:32.501 15:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:32.501 15:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.759 15:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.017 15:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:33.017 15:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:33.017 true 00:07:33.017 15:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:33.017 15:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.275 
15:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.533 15:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:33.533 15:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:33.792 true 00:07:33.792 15:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:33.792 15:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.050 15:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.309 15:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:34.309 15:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:34.309 true 00:07:34.569 15:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:34.569 15:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.569 15:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.828 15:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:34.828 15:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:35.086 true 00:07:35.086 15:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:35.086 15:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.345 15:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.603 15:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:35.603 15:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:35.603 true 00:07:35.603 15:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:35.603 15:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.861 15:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.120 
15:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:36.120 15:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:36.379 true 00:07:36.379 15:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:36.379 15:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.652 15:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.652 15:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:36.652 15:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:36.954 true 00:07:36.954 15:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:36.954 15:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.212 15:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.471 15:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:37.471 15:12:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:37.471 true 00:07:37.471 15:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:37.471 15:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.730 15:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.988 15:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:37.988 15:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:38.246 true 00:07:38.246 15:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:38.246 15:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.505 15:12:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.764 15:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:38.764 15:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:38.764 true 00:07:38.764 15:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:38.764 15:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.023 15:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.281 15:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:39.281 15:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:39.540 true 00:07:39.540 15:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:39.540 15:12:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.798 15:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.798 15:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:39.798 15:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:40.056 true 00:07:40.056 15:12:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:40.056 15:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.315 15:12:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.575 15:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:40.575 15:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:40.833 true 00:07:40.833 15:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:40.833 15:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.833 15:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.091 15:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:41.091 15:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:41.350 true 00:07:41.350 15:12:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:41.350 15:12:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.608 15:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.866 15:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:41.866 15:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:42.127 true 00:07:42.127 15:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:42.127 15:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.385 15:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.385 15:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:42.385 15:12:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:42.643 true 00:07:42.643 15:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:42.643 15:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.901 15:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.160 15:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:43.160 15:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:43.418 true 00:07:43.418 15:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:43.418 15:12:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.676 15:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.676 15:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:43.676 15:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:43.934 true 00:07:43.934 15:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:43.934 15:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.192 
15:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.450 15:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:44.450 15:12:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:44.708 true 00:07:44.708 15:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:44.708 15:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.708 15:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.966 15:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:44.966 15:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:45.224 true 00:07:45.224 15:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:45.224 15:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.482 15:12:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.740 15:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:45.740 15:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:45.740 true 00:07:45.740 15:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:45.740 15:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.999 15:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.257 15:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:46.257 15:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:46.515 true 00:07:46.515 15:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368 00:07:46.515 15:12:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.773 15:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.773 
15:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:07:46.773 15:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:07:47.033 true
00:07:47.033 15:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:47.033 15:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:47.292 15:12:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:47.551 15:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:07:47.551 15:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:07:47.809 true
00:07:47.809 15:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:47.809 15:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:48.068 15:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:48.068 15:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:07:48.068 15:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:07:48.327 true
00:07:48.327 15:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:48.327 15:12:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:48.584 15:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:48.842 15:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:07:48.842 15:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:07:49.101 true
00:07:49.101 15:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:49.101 15:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:49.101 15:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:49.359 15:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:07:49.359 15:12:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:07:49.617 true
00:07:49.617 15:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:49.617 15:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:49.875 15:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:50.133 15:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:07:50.133 15:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:07:50.392 true
00:07:50.392 15:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:50.392 15:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:50.392 15:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:50.648 15:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:07:50.648 15:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:07:50.906 true
00:07:50.906 15:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:50.906 15:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:51.163 15:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:51.420 15:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:07:51.420 15:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:07:51.678 true
00:07:51.678 15:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:51.678 15:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:51.937 15:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:51.937 15:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:07:51.937 15:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:07:52.195 true
00:07:52.195 15:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:52.195 15:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.454 15:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:52.711 15:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:07:52.711 15:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:07:52.966 true
00:07:52.966 15:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:52.966 15:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:53.223 15:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:53.223 15:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:07:53.223 15:12:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:07:53.480 true
00:07:53.480 15:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:53.480 15:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:53.738 15:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:53.994 15:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:07:53.994 15:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:07:54.252 true
00:07:54.252 15:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:54.252 15:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:54.509 15:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:54.509 15:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042
00:07:54.509 15:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042
00:07:54.766 true
00:07:54.766 15:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:54.766 15:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:55.023 15:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:55.282 15:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043
00:07:55.282 15:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043
00:07:55.540 true
00:07:55.540 15:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:55.540 15:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:55.796 15:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:55.796 15:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044
00:07:55.796 15:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044
00:07:56.053 true
00:07:56.053 15:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:56.053 15:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:56.310 15:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:56.568 15:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045
00:07:56.568 15:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045
00:07:56.825 true
00:07:56.825 15:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:56.825 15:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:57.083 15:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:57.083 15:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:07:57.083 15:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:07:57.340 true
00:07:57.340 15:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:57.340 15:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:57.597 15:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:57.855 15:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:07:57.855 15:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:07:58.112 true
00:07:58.112 15:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:58.112 15:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.369 15:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:58.370 Initializing NVMe Controllers
00:07:58.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:58.370 Controller IO queue size 128, less than required.
00:07:58.370 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:58.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:58.370 Initialization complete. Launching workers.
00:07:58.370 ========================================================
00:07:58.370                                                                                                          Latency(us)
00:07:58.370 Device Information                                                                        : IOPS      MiB/s    Average        min        max
00:07:58.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 23232.26   11.34    5509.59    2753.55   10109.38
00:07:58.370 ========================================================
00:07:58.370 Total                                                                                     : 23232.26   11.34    5509.59    2753.55   10109.38
00:07:58.370
00:07:58.370 15:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:07:58.370 15:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:07:58.628 true
00:07:58.628 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3673368
00:07:58.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3673368) - No such process
00:07:58.628 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3673368
00:07:58.628 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:58.885 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:59.143 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:59.143 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:59.143 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:59.143 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:59.143 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:59.143 null0
00:07:59.401 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:59.401 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:59.401 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:59.401 null1
00:07:59.401 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:59.401 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:59.401 15:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:59.659 null2
00:07:59.659 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:59.659 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:59.659 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:59.917 null3
00:07:59.917 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:59.917 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:59.917 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:07:59.917 null4
00:08:00.175 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:00.175 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:00.175 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:00.175 null5
00:08:00.175 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:00.175 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:00.175 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:00.433 null6
00:08:00.433 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:00.433 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:00.433 15:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:00.692 null7
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:00.692 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:00.693 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:00.693 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:00.693 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3679537 3679538 3679540 3679543 3679544 3679546 3679548 3679550
00:08:00.693 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:00.693 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:00.693 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.693 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:00.951 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:01.209 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.209 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:01.209 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:01.209 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:01.210 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:01.210 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:01.210 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:01.210 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.469 15:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.469 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.469 15:12:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.469 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.469 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.469 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.469 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.728 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.728 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.728 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.728 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.728 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.728 15:12:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.728 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.728 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.987 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.245 15:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.245 15:12:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.503 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.503 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.503 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.503 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.503 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.503 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.503 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.503 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.762 15:12:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.762 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.020 15:12:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.020 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.021 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.021 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.021 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1
00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:03.279 15:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.538 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:03.539 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:03.539 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.539 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:03.797 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:03.797 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:03.797 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:03.797 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:03.797 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:03.797 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:03.797 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:03.797 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:04.056 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.315 15:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:04.573 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:04.573 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:04.573 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:04.573 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:04.573 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:04.573 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:04.573 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:04.573 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:04.832 rmmod nvme_tcp
00:08:04.832 rmmod nvme_fabrics
00:08:04.832 rmmod nvme_keyring
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3672880 ']'
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3672880
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3672880 ']'
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3672880
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3672880
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3672880'
killing process with pid 3672880
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3672880
00:08:04.832 15:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3672880
00:08:06.207 15:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:06.207 15:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:06.207 15:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:06.207 15:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:08:06.207 15:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:08:06.207 15:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:06.207 15:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:08:06.207 15:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:06.207 15:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:06.207 15:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:06.207 15:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:06.207 15:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:08.111 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:08.111
00:08:08.111 real 0m49.018s
00:08:08.111 user 3m25.805s
00:08:08.111 sys 0m17.104s
00:08:08.111 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:08.111 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:08.111 ************************************
00:08:08.111 END TEST nvmf_ns_hotplug_stress
00:08:08.111 ************************************
00:08:08.111 15:12:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:08.111 15:12:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:08:08.111 15:12:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:08.111 15:12:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:08.371 ************************************
00:08:08.371 START TEST nvmf_delete_subsystem
00:08:08.371 ************************************
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:08.371 * Looking for test storage...
00:08:08.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:08:08.371 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:08:08.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:08.372 --rc genhtml_branch_coverage=1
00:08:08.372 --rc genhtml_function_coverage=1
00:08:08.372 --rc genhtml_legend=1
00:08:08.372 --rc geninfo_all_blocks=1
00:08:08.372 --rc geninfo_unexecuted_blocks=1
00:08:08.372
00:08:08.372 '
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:08:08.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:08.372 --rc genhtml_branch_coverage=1
00:08:08.372 --rc genhtml_function_coverage=1
00:08:08.372 --rc genhtml_legend=1
00:08:08.372 --rc geninfo_all_blocks=1
00:08:08.372 --rc geninfo_unexecuted_blocks=1
00:08:08.372
00:08:08.372 '
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:08:08.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:08.372 --rc genhtml_branch_coverage=1
00:08:08.372 --rc genhtml_function_coverage=1
00:08:08.372 --rc genhtml_legend=1
00:08:08.372 --rc geninfo_all_blocks=1
00:08:08.372 --rc geninfo_unexecuted_blocks=1
00:08:08.372
00:08:08.372 '
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:08:08.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:08.372 --rc genhtml_branch_coverage=1
00:08:08.372 --rc genhtml_function_coverage=1
00:08:08.372 --rc genhtml_legend=1
00:08:08.372 --rc geninfo_all_blocks=1
00:08:08.372 --rc geninfo_unexecuted_blocks=1
00:08:08.372
00:08:08.372 '
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:08:08.372 15:12:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:14.939 15:12:41
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:14.939 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:14.939 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:14.939 Found net devices under 0000:86:00.0: cvl_0_0 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.939 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:08:14.940 Found net devices under 0000:86:00.1: cvl_0_1 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:14.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:14.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:08:14.940 00:08:14.940 --- 10.0.0.2 ping statistics --- 00:08:14.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.940 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:08:14.940 00:08:14.940 --- 10.0.0.1 ping statistics --- 00:08:14.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.940 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:14.940 15:12:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:14.940 15:12:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.940 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3684159 00:08:14.940 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:14.940 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3684159 00:08:14.940 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3684159 ']' 00:08:14.940 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.940 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:14.940 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.940 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:14.940 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.940 [2024-11-06 15:12:42.082124] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:14.940 [2024-11-06 15:12:42.082248] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.940 [2024-11-06 15:12:42.210143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:14.940 [2024-11-06 15:12:42.318911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.940 [2024-11-06 15:12:42.318951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.940 [2024-11-06 15:12:42.318963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.940 [2024-11-06 15:12:42.318974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.940 [2024-11-06 15:12:42.318983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:14.940 [2024-11-06 15:12:42.321175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.940 [2024-11-06 15:12:42.321195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.505 [2024-11-06 15:12:42.920954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.505 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.506 [2024-11-06 15:12:42.941183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.506 NULL1 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.506 Delay0 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.506 15:12:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3684406 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:15.506 15:12:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:15.506 [2024-11-06 15:12:43.093359] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:17.405 15:12:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.405 15:12:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.406 15:12:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error 
(sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 [2024-11-06 15:12:45.183879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f700 is same with the state(6) to be set 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read 
completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 
Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 starting I/O failed: -6 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 [2024-11-06 15:12:45.188333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error (sct=0, sc=8) 00:08:17.664 Write completed with error (sct=0, sc=8) 00:08:17.664 Read completed with error 
(sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 [2024-11-06 15:12:45.188904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 
00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Read completed with error (sct=0, sc=8) 00:08:17.665 Write completed with error (sct=0, sc=8) 00:08:17.665 Read 
completed with error (sct=0, sc=8) 00:08:17.665 [2024-11-06 15:12:45.189786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020880 is same with the state(6) to be set 00:08:18.599 [2024-11-06 15:12:46.150882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e300 is same with the state(6) to be set 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read 
completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 [2024-11-06 15:12:46.187786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f200 is same with the state(6) to be set 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with 
error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 [2024-11-06 15:12:46.188601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error 
(sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 [2024-11-06 15:12:46.189501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fc00 is same with the state(6) to be set 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Write completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8) 00:08:18.599 Read completed with error (sct=0, sc=8)
00:08:18.599 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.599 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:18.599 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3684406
00:08:18.599 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:18.599 [2024-11-06 15:12:46.195639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020d80 is same with the state(6) to be set
00:08:18.599 Initializing NVMe Controllers
00:08:18.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:18.599 Controller IO queue size 128, less than required.
00:08:18.599 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:18.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:18.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:18.599 Initialization complete. Launching workers.
00:08:18.599 ========================================================
00:08:18.599 Latency(us)
00:08:18.599 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:08:18.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     186.93       0.09  950889.62     645.93 1009989.57
00:08:18.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     158.09       0.08  867524.13     591.32 1013215.90
00:08:18.599 ========================================================
00:08:18.599 Total                                                  :     345.02       0.17  912690.45     591.32 1013215.90
00:08:18.599
00:08:18.599 [2024-11-06 15:12:46.201073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500001e300 (9): Bad file descriptor
00:08:18.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3684406
00:08:19.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3684406) - No such process
00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3684406
00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
common/autotest_common.sh@652 -- # valid_exec_arg wait 3684406 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3684406 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.165 15:12:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.165 [2024-11-06 15:12:46.719345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3685068 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3685068 00:08:19.165 15:12:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:19.424 [2024-11-06 15:12:46.847108] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:19.682 15:12:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:19.682 15:12:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3685068 00:08:19.682 15:12:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:20.247 15:12:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:20.247 15:12:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3685068 00:08:20.247 15:12:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:20.813 15:12:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:20.813 15:12:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3685068 00:08:20.813 15:12:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.378 15:12:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:21.378 15:12:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3685068 00:08:21.378 15:12:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.636 15:12:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:21.636 15:12:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3685068 00:08:21.636 15:12:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.204 15:12:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:22.204 15:12:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3685068
00:08:22.204 15:12:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:22.483 Initializing NVMe Controllers
00:08:22.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:22.483 Controller IO queue size 128, less than required.
00:08:22.483 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:22.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:22.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:22.483 Initialization complete. Launching workers.
00:08:22.483 ========================================================
00:08:22.483 Latency(us)
00:08:22.483 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:08:22.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1003194.20 1000148.66 1040905.08
00:08:22.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004947.09 1000320.40 1013100.44
00:08:22.483 ========================================================
00:08:22.483 Total                                                  :     256.00       0.12 1004070.65 1000148.66 1040905.08
00:08:22.483
00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3685068
00:08:22.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3685068) - No such process
00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 3685068 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.777 rmmod nvme_tcp 00:08:22.777 rmmod nvme_fabrics 00:08:22.777 rmmod nvme_keyring 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3684159 ']' 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3684159 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3684159 ']' 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3684159 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:08:22.777 15:12:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3684159 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3684159' 00:08:22.777 killing process with pid 3684159 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3684159 00:08:22.777 15:12:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3684159 00:08:24.188 15:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:24.188 15:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.188 15:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.188 15:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:24.188 15:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:24.188 15:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.188 15:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.188 15:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.188 15:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns
00:08:24.188 15:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:24.188 15:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:24.188 15:12:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:26.093 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:26.093
00:08:26.093 real	0m17.781s
00:08:26.093 user	0m31.872s
00:08:26.093 sys	0m5.750s
00:08:26.093 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:26.093 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:26.093 ************************************
00:08:26.093 END TEST nvmf_delete_subsystem
00:08:26.093 ************************************
00:08:26.093 15:12:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:26.093 15:12:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:08:26.093 15:12:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:26.093 15:12:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:26.093 ************************************
00:08:26.093 START TEST nvmf_host_management
00:08:26.093 ************************************
00:08:26.093 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:26.093 * Looking for test storage...
00:08:26.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.093 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:26.093 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:08:26.093 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:26.353 15:12:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.353 15:12:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:26.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.353 --rc genhtml_branch_coverage=1 00:08:26.353 --rc genhtml_function_coverage=1 00:08:26.353 --rc genhtml_legend=1 00:08:26.353 --rc geninfo_all_blocks=1 00:08:26.353 --rc geninfo_unexecuted_blocks=1 00:08:26.353 00:08:26.353 ' 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:26.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.353 --rc genhtml_branch_coverage=1 00:08:26.353 --rc genhtml_function_coverage=1 00:08:26.353 --rc genhtml_legend=1 00:08:26.353 --rc geninfo_all_blocks=1 00:08:26.353 --rc geninfo_unexecuted_blocks=1 00:08:26.353 00:08:26.353 ' 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:26.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.353 --rc genhtml_branch_coverage=1 00:08:26.353 --rc genhtml_function_coverage=1 00:08:26.353 --rc genhtml_legend=1 00:08:26.353 --rc geninfo_all_blocks=1 00:08:26.353 --rc geninfo_unexecuted_blocks=1 00:08:26.353 00:08:26.353 ' 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:26.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.353 --rc genhtml_branch_coverage=1 00:08:26.353 --rc genhtml_function_coverage=1 00:08:26.353 --rc genhtml_legend=1 00:08:26.353 --rc geninfo_all_blocks=1 00:08:26.353 --rc geninfo_unexecuted_blocks=1 00:08:26.353 00:08:26.353 ' 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.353 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:26.354 15:12:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.922 15:12:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.922 15:12:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:32.922 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:32.922 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:32.922 15:12:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:32.922 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:32.923 Found net devices under 0000:86:00.0: cvl_0_0 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:32.923 Found net devices under 0000:86:00.1: cvl_0_1 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:32.923 15:12:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:32.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:08:32.923 00:08:32.923 --- 10.0.0.2 ping statistics --- 00:08:32.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.923 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:08:32.923 00:08:32.923 --- 10.0.0.1 ping statistics --- 00:08:32.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.923 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3689349 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3689349 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3689349 ']' 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:32.923 15:12:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.923 [2024-11-06 15:12:59.941399] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:32.923 [2024-11-06 15:12:59.941485] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.923 [2024-11-06 15:13:00.080947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.923 [2024-11-06 15:13:00.192873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.923 [2024-11-06 15:13:00.192919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.923 [2024-11-06 15:13:00.192930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.923 [2024-11-06 15:13:00.192941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.923 [2024-11-06 15:13:00.192949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:32.923 [2024-11-06 15:13:00.195689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.923 [2024-11-06 15:13:00.195784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.923 [2024-11-06 15:13:00.195849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.923 [2024-11-06 15:13:00.195872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:33.181 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.181 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:33.181 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.181 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.181 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.181 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.181 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.181 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.181 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.181 [2024-11-06 15:13:00.816512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:33.439 15:13:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.439 Malloc0 00:08:33.439 [2024-11-06 15:13:00.947732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3689614 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3689614 /var/tmp/bdevperf.sock 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3689614 ']' 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:33.439 15:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:33.439 { 00:08:33.439 "params": { 00:08:33.439 "name": "Nvme$subsystem", 00:08:33.439 "trtype": "$TEST_TRANSPORT", 00:08:33.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.439 "adrfam": "ipv4", 00:08:33.439 "trsvcid": "$NVMF_PORT", 00:08:33.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.439 "hdgst": ${hdgst:-false}, 
00:08:33.439 "ddgst": ${ddgst:-false} 00:08:33.439 }, 00:08:33.439 "method": "bdev_nvme_attach_controller" 00:08:33.439 } 00:08:33.439 EOF 00:08:33.439 )") 00:08:33.439 15:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:33.439 15:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:33.439 15:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:33.439 15:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:33.439 "params": { 00:08:33.439 "name": "Nvme0", 00:08:33.439 "trtype": "tcp", 00:08:33.439 "traddr": "10.0.0.2", 00:08:33.439 "adrfam": "ipv4", 00:08:33.439 "trsvcid": "4420", 00:08:33.439 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:33.439 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:33.439 "hdgst": false, 00:08:33.439 "ddgst": false 00:08:33.439 }, 00:08:33.439 "method": "bdev_nvme_attach_controller" 00:08:33.439 }' 00:08:33.439 [2024-11-06 15:13:01.069655] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:33.439 [2024-11-06 15:13:01.069743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3689614 ] 00:08:33.697 [2024-11-06 15:13:01.198718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.697 [2024-11-06 15:13:01.312468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.629 Running I/O for 10 seconds... 
00:08:34.629 15:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:34.629 15:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:34.629 15:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:34.629 15:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.629 15:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.629 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.629 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.629 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:34.629 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:34.629 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:34.629 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:34.629 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:34.629 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:34.629 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:34.629 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:34.629 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:34.630 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.630 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.630 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.630 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:34.630 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:34.630 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.890 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.890 [2024-11-06 15:13:02.356630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:34.890 [2024-11-06 15:13:02.356678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:34.890 [2024-11-06 15:13:02.356689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:34.890 [2024-11-06 15:13:02.356698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:34.890 [2024-11-06 15:13:02.356707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:34.890 [2024-11-06 15:13:02.356715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:34.890 [2024-11-06 15:13:02.357367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:08:34.890 [2024-11-06 15:13:02.357409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.890 [2024-11-06 15:13:02.357425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:34.890 [2024-11-06 15:13:02.357435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.890 [2024-11-06 15:13:02.357446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:34.890 [2024-11-06 15:13:02.357456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.890 [2024-11-06 15:13:02.357466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:34.890 [2024-11-06 15:13:02.357475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.890 [2024-11-06 15:13:02.357484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:08:34.890 [2024-11-06 15:13:02.357551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.890 [2024-11-06 15:13:02.357566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:08:34.891 [2024-11-06 15:13:02.357844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 
15:13:02.357956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.357987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.357996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.358007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.358016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.358027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.358036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.358047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.358056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.358067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.358076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.358086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.358095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.358106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.358115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.358126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.358136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.358147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.358157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.891 [2024-11-06 15:13:02.358168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.891 [2024-11-06 15:13:02.358177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 
[2024-11-06 15:13:02.358417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.892 [2024-11-06 15:13:02.358748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.892 [2024-11-06 15:13:02.358757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:34.893 [2024-11-06 15:13:02.358768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.893 [2024-11-06 15:13:02.358777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.893 [2024-11-06 15:13:02.358788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.893 [2024-11-06 15:13:02.358797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.893 [2024-11-06 15:13:02.358808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.893 [2024-11-06 15:13:02.358818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.893 [2024-11-06 15:13:02.358829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.893 [2024-11-06 15:13:02.358838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.893 [2024-11-06 15:13:02.358849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.893 [2024-11-06 15:13:02.358857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.893 [2024-11-06 15:13:02.358868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.893 [2024-11-06 
15:13:02.358877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.893 [2024-11-06 15:13:02.360127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:34.893 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.893 task offset: 90112 on job bdev=Nvme0n1 fails 00:08:34.893 00:08:34.893 Latency(us) 00:08:34.893 [2024-11-06T14:13:02.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.893 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:34.893 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:34.893 Verification LBA range: start 0x0 length 0x400 00:08:34.893 Nvme0n1 : 0.40 1739.14 108.70 158.10 0.00 32783.44 2496.61 31332.45 00:08:34.893 [2024-11-06T14:13:02.531Z] =================================================================================================================== 00:08:34.893 [2024-11-06T14:13:02.531Z] Total : 1739.14 108.70 158.10 0.00 32783.44 2496.61 31332.45 00:08:34.893 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:34.893 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.893 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.893 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.893 15:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:34.893 [2024-11-06 15:13:02.376021] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.893 [2024-11-06 15:13:02.376062] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:08:34.893 [2024-11-06 15:13:02.425477] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3689614 00:08:35.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3689614) - No such process 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.825 { 00:08:35.825 "params": { 00:08:35.825 "name": "Nvme$subsystem", 00:08:35.825 "trtype": "$TEST_TRANSPORT", 00:08:35.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.825 "adrfam": "ipv4", 00:08:35.825 "trsvcid": "$NVMF_PORT", 00:08:35.825 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.825 "hdgst": ${hdgst:-false}, 00:08:35.825 "ddgst": ${ddgst:-false} 00:08:35.825 }, 00:08:35.825 "method": "bdev_nvme_attach_controller" 00:08:35.825 } 00:08:35.825 EOF 00:08:35.825 )") 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:35.825 15:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.825 "params": { 00:08:35.825 "name": "Nvme0", 00:08:35.825 "trtype": "tcp", 00:08:35.825 "traddr": "10.0.0.2", 00:08:35.825 "adrfam": "ipv4", 00:08:35.825 "trsvcid": "4420", 00:08:35.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.825 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:35.825 "hdgst": false, 00:08:35.825 "ddgst": false 00:08:35.825 }, 00:08:35.825 "method": "bdev_nvme_attach_controller" 00:08:35.825 }' 00:08:35.825 [2024-11-06 15:13:03.450578] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:35.825 [2024-11-06 15:13:03.450663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3690043 ] 00:08:36.082 [2024-11-06 15:13:03.575837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.082 [2024-11-06 15:13:03.691459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.647 Running I/O for 1 seconds... 
00:08:38.019 1792.00 IOPS, 112.00 MiB/s 00:08:38.019 Latency(us) 00:08:38.019 [2024-11-06T14:13:05.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.019 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:38.019 Verification LBA range: start 0x0 length 0x400 00:08:38.019 Nvme0n1 : 1.03 1802.73 112.67 0.00 0.00 34929.19 5336.50 30583.47 00:08:38.019 [2024-11-06T14:13:05.657Z] =================================================================================================================== 00:08:38.019 [2024-11-06T14:13:05.657Z] Total : 1802.73 112.67 0.00 0.00 34929.19 5336.50 30583.47 00:08:38.585 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:38.585 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:38.585 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:38.585 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:38.585 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:38.585 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:38.585 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:38.585 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.585 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:38.585 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.585 15:13:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:38.585 rmmod nvme_tcp 00:08:38.585 rmmod nvme_fabrics 00:08:38.585 rmmod nvme_keyring 00:08:38.843 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:38.843 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:38.843 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:38.843 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3689349 ']' 00:08:38.843 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3689349 00:08:38.843 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3689349 ']' 00:08:38.843 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3689349 00:08:38.843 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:08:38.843 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:38.843 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3689349 00:08:38.843 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:38.843 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:38.844 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3689349' 00:08:38.844 killing process with pid 3689349 00:08:38.844 15:13:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3689349 00:08:38.844 15:13:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3689349 00:08:40.235 [2024-11-06 15:13:07.540138] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:40.235 15:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:40.235 15:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:40.235 15:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:40.235 15:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:40.235 15:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:40.235 15:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:40.235 15:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:40.235 15:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:40.235 15:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:40.235 15:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.235 15:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.235 15:13:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.139 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:42.139 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:42.139 00:08:42.139 real 0m16.063s 00:08:42.139 user 0m34.797s 
00:08:42.139 sys 0m6.041s 00:08:42.139 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.139 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.139 ************************************ 00:08:42.139 END TEST nvmf_host_management 00:08:42.139 ************************************ 00:08:42.139 15:13:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:42.139 15:13:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:42.139 15:13:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.139 15:13:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.139 ************************************ 00:08:42.139 START TEST nvmf_lvol 00:08:42.139 ************************************ 00:08:42.139 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:42.399 * Looking for test storage... 
00:08:42.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.399 15:13:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:42.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.399 --rc genhtml_branch_coverage=1 00:08:42.399 --rc genhtml_function_coverage=1 00:08:42.399 --rc genhtml_legend=1 00:08:42.399 --rc geninfo_all_blocks=1 00:08:42.399 --rc geninfo_unexecuted_blocks=1 
00:08:42.399 00:08:42.399 ' 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:42.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.399 --rc genhtml_branch_coverage=1 00:08:42.399 --rc genhtml_function_coverage=1 00:08:42.399 --rc genhtml_legend=1 00:08:42.399 --rc geninfo_all_blocks=1 00:08:42.399 --rc geninfo_unexecuted_blocks=1 00:08:42.399 00:08:42.399 ' 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:42.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.399 --rc genhtml_branch_coverage=1 00:08:42.399 --rc genhtml_function_coverage=1 00:08:42.399 --rc genhtml_legend=1 00:08:42.399 --rc geninfo_all_blocks=1 00:08:42.399 --rc geninfo_unexecuted_blocks=1 00:08:42.399 00:08:42.399 ' 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:42.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.399 --rc genhtml_branch_coverage=1 00:08:42.399 --rc genhtml_function_coverage=1 00:08:42.399 --rc genhtml_legend=1 00:08:42.399 --rc geninfo_all_blocks=1 00:08:42.399 --rc geninfo_unexecuted_blocks=1 00:08:42.399 00:08:42.399 ' 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.399 15:13:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.399 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.400 15:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:48.970 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:48.970 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.970 
15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.970 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:48.971 Found net devices under 0000:86:00.0: cvl_0_0 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.971 15:13:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:48.971 Found net devices under 0000:86:00.1: cvl_0_1 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:48.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:08:48.971 00:08:48.971 --- 10.0.0.2 ping statistics --- 00:08:48.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.971 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:08:48.971 00:08:48.971 --- 10.0.0.1 ping statistics --- 00:08:48.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.971 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3694312 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3694312 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3694312 ']' 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:48.971 15:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.971 [2024-11-06 15:13:16.059253] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:48.971 [2024-11-06 15:13:16.059344] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.971 [2024-11-06 15:13:16.189515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:48.971 [2024-11-06 15:13:16.295812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.971 [2024-11-06 15:13:16.295859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.971 [2024-11-06 15:13:16.295870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.971 [2024-11-06 15:13:16.295898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.971 [2024-11-06 15:13:16.295908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:48.971 [2024-11-06 15:13:16.298285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.971 [2024-11-06 15:13:16.298303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.971 [2024-11-06 15:13:16.298328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.229 15:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:49.229 15:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:08:49.229 15:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.229 15:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.229 15:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:49.487 15:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.487 15:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:49.487 [2024-11-06 15:13:17.066760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.487 15:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.054 15:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:50.054 15:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.054 15:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:50.054 15:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:50.312 15:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:50.570 15:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=60fa1600-5950-432f-bd59-9f8c22883231 00:08:50.570 15:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 60fa1600-5950-432f-bd59-9f8c22883231 lvol 20 00:08:50.828 15:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a9429005-0e41-4a4a-aa33-d965ae95047e 00:08:50.828 15:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:51.086 15:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a9429005-0e41-4a4a-aa33-d965ae95047e 00:08:51.086 15:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:51.343 [2024-11-06 15:13:18.828845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.343 15:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.602 15:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3694817 00:08:51.602 15:13:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:51.602 15:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:52.537 15:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a9429005-0e41-4a4a-aa33-d965ae95047e MY_SNAPSHOT 00:08:52.795 15:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=10f979b7-6c20-4cae-8868-3a17baf35ea5 00:08:52.795 15:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a9429005-0e41-4a4a-aa33-d965ae95047e 30 00:08:53.054 15:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 10f979b7-6c20-4cae-8868-3a17baf35ea5 MY_CLONE 00:08:53.313 15:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2abf2b0d-a9a4-4751-8c0f-2371952d84c3 00:08:53.313 15:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2abf2b0d-a9a4-4751-8c0f-2371952d84c3 00:08:53.878 15:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3694817 00:09:01.985 Initializing NVMe Controllers 00:09:01.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:01.985 Controller IO queue size 128, less than required. 00:09:01.985 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:01.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:01.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:01.985 Initialization complete. Launching workers. 00:09:01.985 ======================================================== 00:09:01.985 Latency(us) 00:09:01.985 Device Information : IOPS MiB/s Average min max 00:09:01.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11310.50 44.18 11315.12 513.67 178941.26 00:09:01.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11143.80 43.53 11486.72 4903.31 143415.62 00:09:01.985 ======================================================== 00:09:01.985 Total : 22454.30 87.71 11400.28 513.67 178941.26 00:09:01.985 00:09:01.985 15:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:02.243 15:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9429005-0e41-4a4a-aa33-d965ae95047e 00:09:02.501 15:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 60fa1600-5950-432f-bd59-9f8c22883231 00:09:02.501 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:02.501 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:02.501 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:02.501 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:02.501 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:02.501 15:13:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:02.501 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:02.501 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:02.502 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:02.502 rmmod nvme_tcp 00:09:02.759 rmmod nvme_fabrics 00:09:02.759 rmmod nvme_keyring 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3694312 ']' 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3694312 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3694312 ']' 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3694312 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3694312 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3694312' 00:09:02.759 killing process with pid 3694312 00:09:02.759 
15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3694312 00:09:02.759 15:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3694312 00:09:04.132 15:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:04.132 15:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:04.132 15:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:04.132 15:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:04.132 15:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:04.132 15:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:04.132 15:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:04.132 15:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:04.132 15:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:04.132 15:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.132 15:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.132 15:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.667 15:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:06.667 00:09:06.667 real 0m24.055s 00:09:06.667 user 1m8.538s 00:09:06.667 sys 0m7.658s 00:09:06.667 15:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:06.667 15:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:06.667 ************************************ 00:09:06.667 
END TEST nvmf_lvol 00:09:06.667 ************************************ 00:09:06.667 15:13:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:06.667 15:13:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:06.667 15:13:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:06.667 15:13:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.667 ************************************ 00:09:06.667 START TEST nvmf_lvs_grow 00:09:06.667 ************************************ 00:09:06.667 15:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:06.667 * Looking for test storage... 00:09:06.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.667 15:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:06.667 15:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:09:06.667 15:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.667 15:13:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:06.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.667 --rc genhtml_branch_coverage=1 00:09:06.667 --rc genhtml_function_coverage=1 00:09:06.667 --rc genhtml_legend=1 00:09:06.667 --rc geninfo_all_blocks=1 00:09:06.667 --rc geninfo_unexecuted_blocks=1 00:09:06.667 00:09:06.667 ' 
00:09:06.667 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:06.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.667 --rc genhtml_branch_coverage=1 00:09:06.667 --rc genhtml_function_coverage=1 00:09:06.668 --rc genhtml_legend=1 00:09:06.668 --rc geninfo_all_blocks=1 00:09:06.668 --rc geninfo_unexecuted_blocks=1 00:09:06.668 00:09:06.668 ' 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:06.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.668 --rc genhtml_branch_coverage=1 00:09:06.668 --rc genhtml_function_coverage=1 00:09:06.668 --rc genhtml_legend=1 00:09:06.668 --rc geninfo_all_blocks=1 00:09:06.668 --rc geninfo_unexecuted_blocks=1 00:09:06.668 00:09:06.668 ' 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:06.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.668 --rc genhtml_branch_coverage=1 00:09:06.668 --rc genhtml_function_coverage=1 00:09:06.668 --rc genhtml_legend=1 00:09:06.668 --rc geninfo_all_blocks=1 00:09:06.668 --rc geninfo_unexecuted_blocks=1 00:09:06.668 00:09:06.668 ' 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.668 15:13:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.668 
15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.668 15:13:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.668 
15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.668 15:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.234 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:13.235 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:13.235 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.235 
15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:13.235 Found net devices under 0000:86:00.0: cvl_0_0 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:13.235 Found net devices under 0000:86:00.1: cvl_0_1 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.235 15:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.235 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.235 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.235 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.235 15:13:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:09:13.235 00:09:13.235 --- 10.0.0.2 ping statistics --- 00:09:13.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.235 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:09:13.235 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:09:13.235 00:09:13.235 --- 10.0.0.1 ping statistics --- 00:09:13.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.235 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:09:13.235 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.235 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:13.235 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3700447 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3700447 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3700447 ']' 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:13.236 15:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.236 [2024-11-06 15:13:40.186702] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:13.236 [2024-11-06 15:13:40.186782] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.236 [2024-11-06 15:13:40.313563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.236 [2024-11-06 15:13:40.421303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.236 [2024-11-06 15:13:40.421353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.236 [2024-11-06 15:13:40.421364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.236 [2024-11-06 15:13:40.421376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.236 [2024-11-06 15:13:40.421384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:13.236 [2024-11-06 15:13:40.422934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.493 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:13.493 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:09:13.493 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.493 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.494 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.494 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.494 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:13.752 [2024-11-06 15:13:41.213500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.752 ************************************ 00:09:13.752 START TEST lvs_grow_clean 00:09:13.752 ************************************ 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:13.752 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.011 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:14.011 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:14.269 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=83287229-4c77-4497-bbb2-9addb30bca25 00:09:14.269 15:13:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83287229-4c77-4497-bbb2-9addb30bca25 00:09:14.269 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:14.269 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:14.269 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:14.269 15:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 83287229-4c77-4497-bbb2-9addb30bca25 lvol 150 00:09:14.528 15:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=60472769-bf3b-4e74-b10d-8ff016f668aa 00:09:14.528 15:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.528 15:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:14.786 [2024-11-06 15:13:42.229638] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:14.786 [2024-11-06 15:13:42.229712] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:14.786 true 00:09:14.786 15:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83287229-4c77-4497-bbb2-9addb30bca25 00:09:14.786 15:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:15.044 15:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:15.044 15:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:15.044 15:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 60472769-bf3b-4e74-b10d-8ff016f668aa 00:09:15.301 15:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:15.559 [2024-11-06 15:13:42.959927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.559 15:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:15.559 15:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:15.559 15:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3700954 00:09:15.559 15:13:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:15.560 15:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3700954 /var/tmp/bdevperf.sock 00:09:15.560 15:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3700954 ']' 00:09:15.560 15:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:15.560 15:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:15.560 15:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:15.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:15.560 15:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:15.560 15:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:15.817 [2024-11-06 15:13:43.214143] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:15.817 [2024-11-06 15:13:43.214239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3700954 ] 00:09:15.818 [2024-11-06 15:13:43.335940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.818 [2024-11-06 15:13:43.451569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.752 15:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:16.752 15:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:09:16.752 15:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:17.010 Nvme0n1 00:09:17.010 15:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:17.010 [ 00:09:17.010 { 00:09:17.010 "name": "Nvme0n1", 00:09:17.010 "aliases": [ 00:09:17.010 "60472769-bf3b-4e74-b10d-8ff016f668aa" 00:09:17.010 ], 00:09:17.010 "product_name": "NVMe disk", 00:09:17.010 "block_size": 4096, 00:09:17.010 "num_blocks": 38912, 00:09:17.010 "uuid": "60472769-bf3b-4e74-b10d-8ff016f668aa", 00:09:17.010 "numa_id": 1, 00:09:17.010 "assigned_rate_limits": { 00:09:17.010 "rw_ios_per_sec": 0, 00:09:17.010 "rw_mbytes_per_sec": 0, 00:09:17.010 "r_mbytes_per_sec": 0, 00:09:17.010 "w_mbytes_per_sec": 0 00:09:17.010 }, 00:09:17.010 "claimed": false, 00:09:17.010 "zoned": false, 00:09:17.010 "supported_io_types": { 00:09:17.010 "read": true, 
00:09:17.010 "write": true, 00:09:17.010 "unmap": true, 00:09:17.010 "flush": true, 00:09:17.010 "reset": true, 00:09:17.010 "nvme_admin": true, 00:09:17.010 "nvme_io": true, 00:09:17.010 "nvme_io_md": false, 00:09:17.010 "write_zeroes": true, 00:09:17.010 "zcopy": false, 00:09:17.010 "get_zone_info": false, 00:09:17.010 "zone_management": false, 00:09:17.010 "zone_append": false, 00:09:17.010 "compare": true, 00:09:17.010 "compare_and_write": true, 00:09:17.010 "abort": true, 00:09:17.010 "seek_hole": false, 00:09:17.010 "seek_data": false, 00:09:17.010 "copy": true, 00:09:17.010 "nvme_iov_md": false 00:09:17.010 }, 00:09:17.010 "memory_domains": [ 00:09:17.010 { 00:09:17.010 "dma_device_id": "system", 00:09:17.010 "dma_device_type": 1 00:09:17.010 } 00:09:17.010 ], 00:09:17.010 "driver_specific": { 00:09:17.010 "nvme": [ 00:09:17.010 { 00:09:17.010 "trid": { 00:09:17.010 "trtype": "TCP", 00:09:17.010 "adrfam": "IPv4", 00:09:17.010 "traddr": "10.0.0.2", 00:09:17.010 "trsvcid": "4420", 00:09:17.010 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:17.010 }, 00:09:17.010 "ctrlr_data": { 00:09:17.010 "cntlid": 1, 00:09:17.010 "vendor_id": "0x8086", 00:09:17.010 "model_number": "SPDK bdev Controller", 00:09:17.010 "serial_number": "SPDK0", 00:09:17.010 "firmware_revision": "25.01", 00:09:17.010 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:17.010 "oacs": { 00:09:17.010 "security": 0, 00:09:17.010 "format": 0, 00:09:17.010 "firmware": 0, 00:09:17.010 "ns_manage": 0 00:09:17.010 }, 00:09:17.010 "multi_ctrlr": true, 00:09:17.010 "ana_reporting": false 00:09:17.010 }, 00:09:17.010 "vs": { 00:09:17.010 "nvme_version": "1.3" 00:09:17.010 }, 00:09:17.010 "ns_data": { 00:09:17.010 "id": 1, 00:09:17.010 "can_share": true 00:09:17.010 } 00:09:17.010 } 00:09:17.010 ], 00:09:17.010 "mp_policy": "active_passive" 00:09:17.010 } 00:09:17.010 } 00:09:17.010 ] 00:09:17.010 15:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3701189 00:09:17.010 15:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:17.010 15:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:17.268 Running I/O for 10 seconds... 00:09:18.202 Latency(us) 00:09:18.202 [2024-11-06T14:13:45.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.202 Nvme0n1 : 1.00 20201.00 78.91 0.00 0.00 0.00 0.00 0.00 00:09:18.202 [2024-11-06T14:13:45.840Z] =================================================================================================================== 00:09:18.202 [2024-11-06T14:13:45.840Z] Total : 20201.00 78.91 0.00 0.00 0.00 0.00 0.00 00:09:18.202 00:09:19.136 15:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 83287229-4c77-4497-bbb2-9addb30bca25 00:09:19.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.136 Nvme0n1 : 2.00 20324.00 79.39 0.00 0.00 0.00 0.00 0.00 00:09:19.136 [2024-11-06T14:13:46.774Z] =================================================================================================================== 00:09:19.136 [2024-11-06T14:13:46.774Z] Total : 20324.00 79.39 0.00 0.00 0.00 0.00 0.00 00:09:19.136 00:09:19.434 true 00:09:19.434 15:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83287229-4c77-4497-bbb2-9addb30bca25 00:09:19.434 15:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:19.434 15:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:19.434 15:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:19.434 15:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3701189 00:09:20.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.079 Nvme0n1 : 3.00 20353.00 79.50 0.00 0.00 0.00 0.00 0.00 00:09:20.079 [2024-11-06T14:13:47.717Z] =================================================================================================================== 00:09:20.079 [2024-11-06T14:13:47.717Z] Total : 20353.00 79.50 0.00 0.00 0.00 0.00 0.00 00:09:20.079 00:09:21.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.451 Nvme0n1 : 4.00 20440.00 79.84 0.00 0.00 0.00 0.00 0.00 00:09:21.451 [2024-11-06T14:13:49.089Z] =================================================================================================================== 00:09:21.451 [2024-11-06T14:13:49.089Z] Total : 20440.00 79.84 0.00 0.00 0.00 0.00 0.00 00:09:21.451 00:09:22.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.386 Nvme0n1 : 5.00 20489.20 80.04 0.00 0.00 0.00 0.00 0.00 00:09:22.386 [2024-11-06T14:13:50.024Z] =================================================================================================================== 00:09:22.386 [2024-11-06T14:13:50.024Z] Total : 20489.20 80.04 0.00 0.00 0.00 0.00 0.00 00:09:22.386 00:09:23.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.320 Nvme0n1 : 6.00 20473.00 79.97 0.00 0.00 0.00 0.00 0.00 00:09:23.320 [2024-11-06T14:13:50.958Z] =================================================================================================================== 00:09:23.320 
[2024-11-06T14:13:50.958Z] Total : 20473.00 79.97 0.00 0.00 0.00 0.00 0.00 00:09:23.320 00:09:24.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.253 Nvme0n1 : 7.00 20499.57 80.08 0.00 0.00 0.00 0.00 0.00 00:09:24.254 [2024-11-06T14:13:51.892Z] =================================================================================================================== 00:09:24.254 [2024-11-06T14:13:51.892Z] Total : 20499.57 80.08 0.00 0.00 0.00 0.00 0.00 00:09:24.254 00:09:25.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.186 Nvme0n1 : 8.00 20533.50 80.21 0.00 0.00 0.00 0.00 0.00 00:09:25.186 [2024-11-06T14:13:52.824Z] =================================================================================================================== 00:09:25.186 [2024-11-06T14:13:52.824Z] Total : 20533.50 80.21 0.00 0.00 0.00 0.00 0.00 00:09:25.186 00:09:26.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.120 Nvme0n1 : 9.00 20559.44 80.31 0.00 0.00 0.00 0.00 0.00 00:09:26.120 [2024-11-06T14:13:53.758Z] =================================================================================================================== 00:09:26.120 [2024-11-06T14:13:53.758Z] Total : 20559.44 80.31 0.00 0.00 0.00 0.00 0.00 00:09:26.120 00:09:27.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.494 Nvme0n1 : 10.00 20561.50 80.32 0.00 0.00 0.00 0.00 0.00 00:09:27.494 [2024-11-06T14:13:55.132Z] =================================================================================================================== 00:09:27.494 [2024-11-06T14:13:55.132Z] Total : 20561.50 80.32 0.00 0.00 0.00 0.00 0.00 00:09:27.494 00:09:27.494 00:09:27.494 Latency(us) 00:09:27.494 [2024-11-06T14:13:55.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:27.494 Nvme0n1 : 10.00 20565.42 80.33 0.00 0.00 6220.86 3620.08 15541.39 00:09:27.494 [2024-11-06T14:13:55.132Z] =================================================================================================================== 00:09:27.494 [2024-11-06T14:13:55.132Z] Total : 20565.42 80.33 0.00 0.00 6220.86 3620.08 15541.39 00:09:27.494 { 00:09:27.494 "results": [ 00:09:27.494 { 00:09:27.494 "job": "Nvme0n1", 00:09:27.494 "core_mask": "0x2", 00:09:27.494 "workload": "randwrite", 00:09:27.494 "status": "finished", 00:09:27.494 "queue_depth": 128, 00:09:27.494 "io_size": 4096, 00:09:27.494 "runtime": 10.004316, 00:09:27.494 "iops": 20565.42396301756, 00:09:27.494 "mibps": 80.33368735553735, 00:09:27.494 "io_failed": 0, 00:09:27.494 "io_timeout": 0, 00:09:27.494 "avg_latency_us": 6220.862714801615, 00:09:27.494 "min_latency_us": 3620.0838095238096, 00:09:27.494 "max_latency_us": 15541.394285714287 00:09:27.494 } 00:09:27.494 ], 00:09:27.494 "core_count": 1 00:09:27.494 } 00:09:27.494 15:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3700954 00:09:27.494 15:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3700954 ']' 00:09:27.494 15:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3700954 00:09:27.494 15:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:09:27.494 15:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:27.494 15:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3700954 00:09:27.494 15:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:27.494 15:13:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:27.494 15:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3700954' 00:09:27.494 killing process with pid 3700954 00:09:27.494 15:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3700954 00:09:27.494 Received shutdown signal, test time was about 10.000000 seconds 00:09:27.494 00:09:27.494 Latency(us) 00:09:27.494 [2024-11-06T14:13:55.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.494 [2024-11-06T14:13:55.132Z] =================================================================================================================== 00:09:27.494 [2024-11-06T14:13:55.132Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:27.494 15:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3700954 00:09:28.060 15:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.318 15:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:28.576 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83287229-4c77-4497-bbb2-9addb30bca25 00:09:28.576 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:28.834 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:28.834 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:28.834 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:28.834 [2024-11-06 15:13:56.423173] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:28.835 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83287229-4c77-4497-bbb2-9addb30bca25 00:09:28.835 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:28.835 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83287229-4c77-4497-bbb2-9addb30bca25 00:09:28.835 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:28.835 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.835 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.093 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.093 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.093 
15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.093 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.093 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:29.093 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83287229-4c77-4497-bbb2-9addb30bca25 00:09:29.093 request: 00:09:29.093 { 00:09:29.093 "uuid": "83287229-4c77-4497-bbb2-9addb30bca25", 00:09:29.093 "method": "bdev_lvol_get_lvstores", 00:09:29.093 "req_id": 1 00:09:29.093 } 00:09:29.093 Got JSON-RPC error response 00:09:29.093 response: 00:09:29.093 { 00:09:29.093 "code": -19, 00:09:29.093 "message": "No such device" 00:09:29.093 } 00:09:29.093 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:29.093 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:29.093 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:29.093 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:29.093 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.351 aio_bdev 00:09:29.351 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 60472769-bf3b-4e74-b10d-8ff016f668aa 00:09:29.351 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=60472769-bf3b-4e74-b10d-8ff016f668aa 00:09:29.351 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:29.351 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:09:29.351 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:29.351 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:29.351 15:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:29.609 15:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 60472769-bf3b-4e74-b10d-8ff016f668aa -t 2000 00:09:29.609 [ 00:09:29.609 { 00:09:29.609 "name": "60472769-bf3b-4e74-b10d-8ff016f668aa", 00:09:29.609 "aliases": [ 00:09:29.609 "lvs/lvol" 00:09:29.609 ], 00:09:29.609 "product_name": "Logical Volume", 00:09:29.609 "block_size": 4096, 00:09:29.609 "num_blocks": 38912, 00:09:29.609 "uuid": "60472769-bf3b-4e74-b10d-8ff016f668aa", 00:09:29.609 "assigned_rate_limits": { 00:09:29.609 "rw_ios_per_sec": 0, 00:09:29.609 "rw_mbytes_per_sec": 0, 00:09:29.609 "r_mbytes_per_sec": 0, 00:09:29.609 "w_mbytes_per_sec": 0 00:09:29.609 }, 00:09:29.609 "claimed": false, 00:09:29.609 "zoned": false, 00:09:29.609 "supported_io_types": { 00:09:29.609 "read": true, 00:09:29.609 "write": true, 00:09:29.609 "unmap": true, 00:09:29.609 "flush": false, 00:09:29.609 "reset": true, 00:09:29.609 
"nvme_admin": false, 00:09:29.609 "nvme_io": false, 00:09:29.609 "nvme_io_md": false, 00:09:29.609 "write_zeroes": true, 00:09:29.609 "zcopy": false, 00:09:29.609 "get_zone_info": false, 00:09:29.609 "zone_management": false, 00:09:29.609 "zone_append": false, 00:09:29.609 "compare": false, 00:09:29.609 "compare_and_write": false, 00:09:29.609 "abort": false, 00:09:29.609 "seek_hole": true, 00:09:29.609 "seek_data": true, 00:09:29.609 "copy": false, 00:09:29.609 "nvme_iov_md": false 00:09:29.609 }, 00:09:29.609 "driver_specific": { 00:09:29.609 "lvol": { 00:09:29.609 "lvol_store_uuid": "83287229-4c77-4497-bbb2-9addb30bca25", 00:09:29.609 "base_bdev": "aio_bdev", 00:09:29.609 "thin_provision": false, 00:09:29.609 "num_allocated_clusters": 38, 00:09:29.609 "snapshot": false, 00:09:29.609 "clone": false, 00:09:29.609 "esnap_clone": false 00:09:29.609 } 00:09:29.609 } 00:09:29.609 } 00:09:29.609 ] 00:09:29.867 15:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:09:29.867 15:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83287229-4c77-4497-bbb2-9addb30bca25 00:09:29.867 15:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:29.867 15:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:29.867 15:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:29.867 15:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 83287229-4c77-4497-bbb2-9addb30bca25 00:09:30.125 15:13:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:30.125 15:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 60472769-bf3b-4e74-b10d-8ff016f668aa 00:09:30.383 15:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 83287229-4c77-4497-bbb2-9addb30bca25 00:09:30.641 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:30.641 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:30.641 00:09:30.641 real 0m16.972s 00:09:30.641 user 0m16.629s 00:09:30.641 sys 0m1.540s 00:09:30.641 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:30.641 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:30.641 ************************************ 00:09:30.641 END TEST lvs_grow_clean 00:09:30.641 ************************************ 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:30.899 ************************************ 
00:09:30.899 START TEST lvs_grow_dirty 00:09:30.899 ************************************ 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:30.899 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:31.158 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:31.158 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:31.158 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:31.158 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:31.158 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:31.416 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:31.416 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:31.416 15:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fa841146-fc99-4d4d-9dde-81bfdd0f775c lvol 150 00:09:31.673 15:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3b952b2d-b97c-42b1-adb8-32c5c002f3ee 00:09:31.673 15:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:31.673 15:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:31.673 [2024-11-06 15:13:59.282142] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:31.673 [2024-11-06 15:13:59.282239] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:31.673 true 00:09:31.673 15:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:31.673 15:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:31.931 15:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:31.932 15:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:32.188 15:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b952b2d-b97c-42b1-adb8-32c5c002f3ee 00:09:32.445 15:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:32.445 [2024-11-06 15:14:00.008457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.445 15:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:32.703 15:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3704006 00:09:32.703 15:14:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:32.703 15:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:32.703 15:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3704006 /var/tmp/bdevperf.sock 00:09:32.703 15:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3704006 ']' 00:09:32.703 15:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:32.703 15:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:32.703 15:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:32.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:32.703 15:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:32.703 15:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.703 [2024-11-06 15:14:00.287050] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:32.703 [2024-11-06 15:14:00.287136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3704006 ] 00:09:32.962 [2024-11-06 15:14:00.409319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.962 [2024-11-06 15:14:00.520307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.527 15:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:33.527 15:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:09:33.527 15:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:33.785 Nvme0n1 00:09:33.785 15:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:34.044 [ 00:09:34.044 { 00:09:34.044 "name": "Nvme0n1", 00:09:34.044 "aliases": [ 00:09:34.044 "3b952b2d-b97c-42b1-adb8-32c5c002f3ee" 00:09:34.044 ], 00:09:34.044 "product_name": "NVMe disk", 00:09:34.044 "block_size": 4096, 00:09:34.044 "num_blocks": 38912, 00:09:34.044 "uuid": "3b952b2d-b97c-42b1-adb8-32c5c002f3ee", 00:09:34.044 "numa_id": 1, 00:09:34.044 "assigned_rate_limits": { 00:09:34.044 "rw_ios_per_sec": 0, 00:09:34.044 "rw_mbytes_per_sec": 0, 00:09:34.044 "r_mbytes_per_sec": 0, 00:09:34.044 "w_mbytes_per_sec": 0 00:09:34.044 }, 00:09:34.044 "claimed": false, 00:09:34.044 "zoned": false, 00:09:34.044 "supported_io_types": { 00:09:34.044 "read": true, 
00:09:34.044 "write": true, 00:09:34.044 "unmap": true, 00:09:34.044 "flush": true, 00:09:34.044 "reset": true, 00:09:34.044 "nvme_admin": true, 00:09:34.044 "nvme_io": true, 00:09:34.044 "nvme_io_md": false, 00:09:34.044 "write_zeroes": true, 00:09:34.044 "zcopy": false, 00:09:34.044 "get_zone_info": false, 00:09:34.044 "zone_management": false, 00:09:34.044 "zone_append": false, 00:09:34.044 "compare": true, 00:09:34.044 "compare_and_write": true, 00:09:34.044 "abort": true, 00:09:34.044 "seek_hole": false, 00:09:34.044 "seek_data": false, 00:09:34.044 "copy": true, 00:09:34.044 "nvme_iov_md": false 00:09:34.044 }, 00:09:34.044 "memory_domains": [ 00:09:34.044 { 00:09:34.044 "dma_device_id": "system", 00:09:34.044 "dma_device_type": 1 00:09:34.044 } 00:09:34.044 ], 00:09:34.044 "driver_specific": { 00:09:34.044 "nvme": [ 00:09:34.044 { 00:09:34.044 "trid": { 00:09:34.044 "trtype": "TCP", 00:09:34.044 "adrfam": "IPv4", 00:09:34.044 "traddr": "10.0.0.2", 00:09:34.044 "trsvcid": "4420", 00:09:34.044 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:34.044 }, 00:09:34.044 "ctrlr_data": { 00:09:34.044 "cntlid": 1, 00:09:34.044 "vendor_id": "0x8086", 00:09:34.044 "model_number": "SPDK bdev Controller", 00:09:34.044 "serial_number": "SPDK0", 00:09:34.044 "firmware_revision": "25.01", 00:09:34.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:34.044 "oacs": { 00:09:34.044 "security": 0, 00:09:34.044 "format": 0, 00:09:34.044 "firmware": 0, 00:09:34.044 "ns_manage": 0 00:09:34.044 }, 00:09:34.044 "multi_ctrlr": true, 00:09:34.044 "ana_reporting": false 00:09:34.044 }, 00:09:34.044 "vs": { 00:09:34.044 "nvme_version": "1.3" 00:09:34.044 }, 00:09:34.044 "ns_data": { 00:09:34.044 "id": 1, 00:09:34.044 "can_share": true 00:09:34.044 } 00:09:34.044 } 00:09:34.044 ], 00:09:34.044 "mp_policy": "active_passive" 00:09:34.044 } 00:09:34.044 } 00:09:34.044 ] 00:09:34.044 15:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3704207 00:09:34.044 15:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:34.044 15:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:34.044 Running I/O for 10 seconds... 00:09:35.422 Latency(us) 00:09:35.422 [2024-11-06T14:14:03.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.422 Nvme0n1 : 1.00 20392.00 79.66 0.00 0.00 0.00 0.00 0.00 00:09:35.422 [2024-11-06T14:14:03.060Z] =================================================================================================================== 00:09:35.422 [2024-11-06T14:14:03.060Z] Total : 20392.00 79.66 0.00 0.00 0.00 0.00 0.00 00:09:35.422 00:09:35.989 15:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:36.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.247 Nvme0n1 : 2.00 20328.00 79.41 0.00 0.00 0.00 0.00 0.00 00:09:36.247 [2024-11-06T14:14:03.885Z] =================================================================================================================== 00:09:36.247 [2024-11-06T14:14:03.885Z] Total : 20328.00 79.41 0.00 0.00 0.00 0.00 0.00 00:09:36.247 00:09:36.247 true 00:09:36.247 15:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:36.247 15:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:36.506 15:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:36.506 15:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:36.506 15:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3704207 00:09:37.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.074 Nvme0n1 : 3.00 20326.33 79.40 0.00 0.00 0.00 0.00 0.00 00:09:37.074 [2024-11-06T14:14:04.712Z] =================================================================================================================== 00:09:37.074 [2024-11-06T14:14:04.712Z] Total : 20326.33 79.40 0.00 0.00 0.00 0.00 0.00 00:09:37.074 00:09:38.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.010 Nvme0n1 : 4.00 20420.75 79.77 0.00 0.00 0.00 0.00 0.00 00:09:38.010 [2024-11-06T14:14:05.648Z] =================================================================================================================== 00:09:38.010 [2024-11-06T14:14:05.648Z] Total : 20420.75 79.77 0.00 0.00 0.00 0.00 0.00 00:09:38.010 00:09:39.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.387 Nvme0n1 : 5.00 20451.40 79.89 0.00 0.00 0.00 0.00 0.00 00:09:39.387 [2024-11-06T14:14:07.025Z] =================================================================================================================== 00:09:39.387 [2024-11-06T14:14:07.025Z] Total : 20451.40 79.89 0.00 0.00 0.00 0.00 0.00 00:09:39.387 00:09:40.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.324 Nvme0n1 : 6.00 20471.83 79.97 0.00 0.00 0.00 0.00 0.00 00:09:40.324 [2024-11-06T14:14:07.962Z] =================================================================================================================== 00:09:40.324 
[2024-11-06T14:14:07.962Z] Total : 20471.83 79.97 0.00 0.00 0.00 0.00 0.00 00:09:40.324 00:09:41.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.261 Nvme0n1 : 7.00 20504.57 80.10 0.00 0.00 0.00 0.00 0.00 00:09:41.261 [2024-11-06T14:14:08.899Z] =================================================================================================================== 00:09:41.261 [2024-11-06T14:14:08.899Z] Total : 20504.57 80.10 0.00 0.00 0.00 0.00 0.00 00:09:41.261 00:09:42.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.204 Nvme0n1 : 8.00 20529.12 80.19 0.00 0.00 0.00 0.00 0.00 00:09:42.204 [2024-11-06T14:14:09.843Z] =================================================================================================================== 00:09:42.205 [2024-11-06T14:14:09.843Z] Total : 20529.12 80.19 0.00 0.00 0.00 0.00 0.00 00:09:42.205 00:09:43.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.143 Nvme0n1 : 9.00 20520.56 80.16 0.00 0.00 0.00 0.00 0.00 00:09:43.143 [2024-11-06T14:14:10.781Z] =================================================================================================================== 00:09:43.143 [2024-11-06T14:14:10.781Z] Total : 20520.56 80.16 0.00 0.00 0.00 0.00 0.00 00:09:43.143 00:09:44.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.079 Nvme0n1 : 10.00 20538.60 80.23 0.00 0.00 0.00 0.00 0.00 00:09:44.079 [2024-11-06T14:14:11.717Z] =================================================================================================================== 00:09:44.079 [2024-11-06T14:14:11.717Z] Total : 20538.60 80.23 0.00 0.00 0.00 0.00 0.00 00:09:44.079 00:09:44.079 00:09:44.079 Latency(us) 00:09:44.079 [2024-11-06T14:14:11.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:44.079 Nvme0n1 : 10.01 20537.79 80.23 0.00 0.00 6229.08 2949.12 12670.29 00:09:44.079 [2024-11-06T14:14:11.717Z] =================================================================================================================== 00:09:44.079 [2024-11-06T14:14:11.717Z] Total : 20537.79 80.23 0.00 0.00 6229.08 2949.12 12670.29 00:09:44.079 { 00:09:44.079 "results": [ 00:09:44.079 { 00:09:44.079 "job": "Nvme0n1", 00:09:44.079 "core_mask": "0x2", 00:09:44.079 "workload": "randwrite", 00:09:44.079 "status": "finished", 00:09:44.079 "queue_depth": 128, 00:09:44.079 "io_size": 4096, 00:09:44.079 "runtime": 10.006629, 00:09:44.079 "iops": 20537.78550199073, 00:09:44.079 "mibps": 80.22572461715129, 00:09:44.079 "io_failed": 0, 00:09:44.079 "io_timeout": 0, 00:09:44.079 "avg_latency_us": 6229.08078997283, 00:09:44.079 "min_latency_us": 2949.12, 00:09:44.079 "max_latency_us": 12670.293333333333 00:09:44.079 } 00:09:44.079 ], 00:09:44.079 "core_count": 1 00:09:44.079 } 00:09:44.079 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3704006 00:09:44.079 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3704006 ']' 00:09:44.079 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3704006 00:09:44.079 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:09:44.079 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:44.079 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3704006 00:09:44.338 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:44.338 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:44.338 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3704006' 00:09:44.338 killing process with pid 3704006 00:09:44.338 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3704006 00:09:44.338 Received shutdown signal, test time was about 10.000000 seconds 00:09:44.338 00:09:44.338 Latency(us) 00:09:44.338 [2024-11-06T14:14:11.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.338 [2024-11-06T14:14:11.976Z] =================================================================================================================== 00:09:44.338 [2024-11-06T14:14:11.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:44.338 15:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3704006 00:09:45.274 15:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:45.274 15:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:45.533 15:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:45.533 15:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:45.792 15:14:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3700447 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3700447 00:09:45.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3700447 Killed "${NVMF_APP[@]}" "$@" 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3706094 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3706094 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3706094 ']' 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:45.792 15:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:45.792 [2024-11-06 15:14:13.335257] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:45.792 [2024-11-06 15:14:13.335344] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.051 [2024-11-06 15:14:13.471775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.051 [2024-11-06 15:14:13.574004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.051 [2024-11-06 15:14:13.574049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.051 [2024-11-06 15:14:13.574060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.051 [2024-11-06 15:14:13.574085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.051 [2024-11-06 15:14:13.574096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:46.051 [2024-11-06 15:14:13.575539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.618 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:46.618 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:09:46.618 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.618 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:46.618 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.618 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.618 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:46.876 [2024-11-06 15:14:14.344385] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:46.876 [2024-11-06 15:14:14.344549] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:46.876 [2024-11-06 15:14:14.344586] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:46.876 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:46.876 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3b952b2d-b97c-42b1-adb8-32c5c002f3ee 00:09:46.876 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=3b952b2d-b97c-42b1-adb8-32c5c002f3ee 
00:09:46.876 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:46.876 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:46.876 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:46.876 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:46.876 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:47.135 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b952b2d-b97c-42b1-adb8-32c5c002f3ee -t 2000 00:09:47.135 [ 00:09:47.135 { 00:09:47.135 "name": "3b952b2d-b97c-42b1-adb8-32c5c002f3ee", 00:09:47.135 "aliases": [ 00:09:47.135 "lvs/lvol" 00:09:47.135 ], 00:09:47.135 "product_name": "Logical Volume", 00:09:47.135 "block_size": 4096, 00:09:47.135 "num_blocks": 38912, 00:09:47.135 "uuid": "3b952b2d-b97c-42b1-adb8-32c5c002f3ee", 00:09:47.135 "assigned_rate_limits": { 00:09:47.135 "rw_ios_per_sec": 0, 00:09:47.135 "rw_mbytes_per_sec": 0, 00:09:47.135 "r_mbytes_per_sec": 0, 00:09:47.135 "w_mbytes_per_sec": 0 00:09:47.135 }, 00:09:47.135 "claimed": false, 00:09:47.135 "zoned": false, 00:09:47.135 "supported_io_types": { 00:09:47.135 "read": true, 00:09:47.135 "write": true, 00:09:47.135 "unmap": true, 00:09:47.135 "flush": false, 00:09:47.135 "reset": true, 00:09:47.135 "nvme_admin": false, 00:09:47.135 "nvme_io": false, 00:09:47.135 "nvme_io_md": false, 00:09:47.135 "write_zeroes": true, 00:09:47.135 "zcopy": false, 00:09:47.135 "get_zone_info": false, 00:09:47.135 "zone_management": false, 00:09:47.135 "zone_append": 
false, 00:09:47.135 "compare": false, 00:09:47.135 "compare_and_write": false, 00:09:47.135 "abort": false, 00:09:47.135 "seek_hole": true, 00:09:47.135 "seek_data": true, 00:09:47.135 "copy": false, 00:09:47.135 "nvme_iov_md": false 00:09:47.135 }, 00:09:47.135 "driver_specific": { 00:09:47.135 "lvol": { 00:09:47.135 "lvol_store_uuid": "fa841146-fc99-4d4d-9dde-81bfdd0f775c", 00:09:47.135 "base_bdev": "aio_bdev", 00:09:47.135 "thin_provision": false, 00:09:47.135 "num_allocated_clusters": 38, 00:09:47.135 "snapshot": false, 00:09:47.135 "clone": false, 00:09:47.135 "esnap_clone": false 00:09:47.135 } 00:09:47.135 } 00:09:47.135 } 00:09:47.135 ] 00:09:47.135 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:47.135 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:47.135 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:47.394 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:47.394 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:47.394 15:14:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:47.652 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:47.652 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:47.912 [2024-11-06 15:14:15.292817] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.912 15:14:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:47.912 request: 00:09:47.912 { 00:09:47.912 "uuid": "fa841146-fc99-4d4d-9dde-81bfdd0f775c", 00:09:47.912 "method": "bdev_lvol_get_lvstores", 00:09:47.912 "req_id": 1 00:09:47.912 } 00:09:47.912 Got JSON-RPC error response 00:09:47.912 response: 00:09:47.912 { 00:09:47.912 "code": -19, 00:09:47.912 "message": "No such device" 00:09:47.912 } 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:47.912 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:48.172 aio_bdev 00:09:48.172 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3b952b2d-b97c-42b1-adb8-32c5c002f3ee 00:09:48.172 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=3b952b2d-b97c-42b1-adb8-32c5c002f3ee 00:09:48.172 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:48.172 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:48.172 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:48.172 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:48.172 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:48.430 15:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b952b2d-b97c-42b1-adb8-32c5c002f3ee -t 2000 00:09:48.689 [ 00:09:48.689 { 00:09:48.689 "name": "3b952b2d-b97c-42b1-adb8-32c5c002f3ee", 00:09:48.689 "aliases": [ 00:09:48.689 "lvs/lvol" 00:09:48.689 ], 00:09:48.689 "product_name": "Logical Volume", 00:09:48.689 "block_size": 4096, 00:09:48.689 "num_blocks": 38912, 00:09:48.689 "uuid": "3b952b2d-b97c-42b1-adb8-32c5c002f3ee", 00:09:48.689 "assigned_rate_limits": { 00:09:48.689 "rw_ios_per_sec": 0, 00:09:48.689 "rw_mbytes_per_sec": 0, 00:09:48.689 "r_mbytes_per_sec": 0, 00:09:48.689 "w_mbytes_per_sec": 0 00:09:48.689 }, 00:09:48.689 "claimed": false, 00:09:48.689 "zoned": false, 00:09:48.689 "supported_io_types": { 00:09:48.689 "read": true, 00:09:48.689 "write": true, 00:09:48.689 "unmap": true, 00:09:48.689 "flush": false, 00:09:48.689 "reset": true, 00:09:48.690 "nvme_admin": false, 00:09:48.690 "nvme_io": false, 00:09:48.690 "nvme_io_md": false, 00:09:48.690 "write_zeroes": true, 00:09:48.690 "zcopy": false, 00:09:48.690 "get_zone_info": false, 00:09:48.690 "zone_management": false, 00:09:48.690 "zone_append": false, 00:09:48.690 "compare": false, 00:09:48.690 "compare_and_write": false, 
00:09:48.690 "abort": false, 00:09:48.690 "seek_hole": true, 00:09:48.690 "seek_data": true, 00:09:48.690 "copy": false, 00:09:48.690 "nvme_iov_md": false 00:09:48.690 }, 00:09:48.690 "driver_specific": { 00:09:48.690 "lvol": { 00:09:48.690 "lvol_store_uuid": "fa841146-fc99-4d4d-9dde-81bfdd0f775c", 00:09:48.690 "base_bdev": "aio_bdev", 00:09:48.690 "thin_provision": false, 00:09:48.690 "num_allocated_clusters": 38, 00:09:48.690 "snapshot": false, 00:09:48.690 "clone": false, 00:09:48.690 "esnap_clone": false 00:09:48.690 } 00:09:48.690 } 00:09:48.690 } 00:09:48.690 ] 00:09:48.690 15:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:48.690 15:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:48.690 15:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:48.690 15:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:48.690 15:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:48.690 15:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:48.948 15:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:48.948 15:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3b952b2d-b97c-42b1-adb8-32c5c002f3ee 00:09:49.207 15:14:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa841146-fc99-4d4d-9dde-81bfdd0f775c 00:09:49.466 15:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:49.466 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:49.466 00:09:49.466 real 0m18.729s 00:09:49.466 user 0m48.305s 00:09:49.466 sys 0m3.924s 00:09:49.466 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:49.466 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.466 ************************************ 00:09:49.466 END TEST lvs_grow_dirty 00:09:49.466 ************************************ 00:09:49.466 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:49.466 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:09:49.466 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:09:49.466 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:09:49.466 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:49.466 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:09:49.466 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:09:49.466 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@822 -- # for n in $shm_files 00:09:49.466 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:49.466 nvmf_trace.0 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:49.725 rmmod nvme_tcp 00:09:49.725 rmmod nvme_fabrics 00:09:49.725 rmmod nvme_keyring 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3706094 ']' 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3706094 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3706094 ']' 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3706094 
00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3706094 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3706094' 00:09:49.725 killing process with pid 3706094 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3706094 00:09:49.725 15:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3706094 00:09:51.102 15:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.102 15:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.102 15:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.102 15:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:51.102 15:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:51.102 15:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:51.102 15:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:51.102 15:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.102 15:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:51.102 15:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.102 15:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.102 15:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:53.006 00:09:53.006 real 0m46.488s 00:09:53.006 user 1m12.053s 00:09:53.006 sys 0m10.516s 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:53.006 ************************************ 00:09:53.006 END TEST nvmf_lvs_grow 00:09:53.006 ************************************ 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.006 ************************************ 00:09:53.006 START TEST nvmf_bdev_io_wait 00:09:53.006 ************************************ 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:53.006 * Looking for test storage... 
00:09:53.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:53.006 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.006 --rc genhtml_branch_coverage=1 00:09:53.006 --rc genhtml_function_coverage=1 00:09:53.006 --rc genhtml_legend=1 00:09:53.006 --rc geninfo_all_blocks=1 00:09:53.006 --rc geninfo_unexecuted_blocks=1 00:09:53.006 00:09:53.006 ' 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:53.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.006 --rc genhtml_branch_coverage=1 00:09:53.006 --rc genhtml_function_coverage=1 00:09:53.006 --rc genhtml_legend=1 00:09:53.006 --rc geninfo_all_blocks=1 00:09:53.006 --rc geninfo_unexecuted_blocks=1 00:09:53.006 00:09:53.006 ' 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:53.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.006 --rc genhtml_branch_coverage=1 00:09:53.006 --rc genhtml_function_coverage=1 00:09:53.006 --rc genhtml_legend=1 00:09:53.006 --rc geninfo_all_blocks=1 00:09:53.006 --rc geninfo_unexecuted_blocks=1 00:09:53.006 00:09:53.006 ' 00:09:53.006 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:53.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.007 --rc genhtml_branch_coverage=1 00:09:53.007 --rc genhtml_function_coverage=1 00:09:53.007 --rc genhtml_legend=1 00:09:53.007 --rc geninfo_all_blocks=1 00:09:53.007 --rc geninfo_unexecuted_blocks=1 00:09:53.007 00:09:53.007 ' 00:09:53.007 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.007 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:53.007 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.007 15:14:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.007 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.007 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.007 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.007 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.007 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.007 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.007 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.266 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.267 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.267 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:53.267 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:53.267 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:53.267 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.267 15:14:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.833 15:14:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:59.833 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.833 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:59.834 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.834 15:14:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:59.834 Found net devices under 0000:86:00.0: cvl_0_0 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.834 
15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:59.834 Found net devices under 0000:86:00.1: cvl_0_1 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.834 15:14:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:09:59.834 00:09:59.834 --- 10.0.0.2 ping statistics --- 00:09:59.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.834 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:09:59.834 00:09:59.834 --- 10.0.0.1 ping statistics --- 00:09:59.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.834 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3710604 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 3710604 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3710604 ']' 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:59.834 15:14:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.834 [2024-11-06 15:14:26.790890] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:59.834 [2024-11-06 15:14:26.790977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.834 [2024-11-06 15:14:26.920275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.834 [2024-11-06 15:14:27.029622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.834 [2024-11-06 15:14:27.029667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:59.834 [2024-11-06 15:14:27.029678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.834 [2024-11-06 15:14:27.029688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.834 [2024-11-06 15:14:27.029696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.834 [2024-11-06 15:14:27.032166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.834 [2024-11-06 15:14:27.032264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.834 [2024-11-06 15:14:27.032324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.835 [2024-11-06 15:14:27.032339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.093 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:00.093 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:10:00.093 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:00.093 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:00.093 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.093 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.093 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:00.093 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.093 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.093 15:14:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.093 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:00.093 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.093 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.352 [2024-11-06 15:14:27.867262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.352 Malloc0 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.352 
15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.352 [2024-11-06 15:14:27.967028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3710806 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3710809 
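The four bdevperf instances launched here use one-bit core masks (`-m 0x10`, `0x20`, `0x40`, `0x80`), pinning each workload (write, read, flush, unmap) to a single reactor core — which is why the log later reports "Reactor started on core 4/5/6/7", one core per instance. To make the mask arithmetic concrete, here is a minimal sketch (the helper name `cores_from_mask` is ours, not from the test scripts):

```python
# Decode an SPDK/DPDK core mask (the -m / -c argument) into the core
# indices it selects: each set bit enables the CPU core at that bit position.

def cores_from_mask(mask: int) -> list[int]:
    """Return the set bit positions of a core mask, i.e. the enabled cores."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

# Masks used by the write/read/flush/unmap bdevperf instances in this run.
for mask in (0x10, 0x20, 0x40, 0x80):
    print(hex(mask), cores_from_mask(mask))
# 0x10 -> [4], 0x20 -> [5], 0x40 -> [6], 0x80 -> [7]
```

A multi-bit mask selects several cores at once; the nvmf target above uses `-m 0xF`, i.e. cores 0 through 3, matching its four "Reactor started" notices.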
00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:00.352 { 00:10:00.352 "params": { 00:10:00.352 "name": "Nvme$subsystem", 00:10:00.352 "trtype": "$TEST_TRANSPORT", 00:10:00.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.352 "adrfam": "ipv4", 00:10:00.352 "trsvcid": "$NVMF_PORT", 00:10:00.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.352 "hdgst": ${hdgst:-false}, 00:10:00.352 "ddgst": ${ddgst:-false} 00:10:00.352 }, 00:10:00.352 "method": "bdev_nvme_attach_controller" 00:10:00.352 } 00:10:00.352 EOF 00:10:00.352 )") 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:00.352 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3710812 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:00.353 { 00:10:00.353 "params": { 00:10:00.353 
"name": "Nvme$subsystem", 00:10:00.353 "trtype": "$TEST_TRANSPORT", 00:10:00.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.353 "adrfam": "ipv4", 00:10:00.353 "trsvcid": "$NVMF_PORT", 00:10:00.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.353 "hdgst": ${hdgst:-false}, 00:10:00.353 "ddgst": ${ddgst:-false} 00:10:00.353 }, 00:10:00.353 "method": "bdev_nvme_attach_controller" 00:10:00.353 } 00:10:00.353 EOF 00:10:00.353 )") 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3710816 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:00.353 { 00:10:00.353 "params": { 00:10:00.353 "name": "Nvme$subsystem", 00:10:00.353 "trtype": 
"$TEST_TRANSPORT", 00:10:00.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.353 "adrfam": "ipv4", 00:10:00.353 "trsvcid": "$NVMF_PORT", 00:10:00.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.353 "hdgst": ${hdgst:-false}, 00:10:00.353 "ddgst": ${ddgst:-false} 00:10:00.353 }, 00:10:00.353 "method": "bdev_nvme_attach_controller" 00:10:00.353 } 00:10:00.353 EOF 00:10:00.353 )") 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:00.353 { 00:10:00.353 "params": { 00:10:00.353 "name": "Nvme$subsystem", 00:10:00.353 "trtype": "$TEST_TRANSPORT", 00:10:00.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.353 "adrfam": "ipv4", 00:10:00.353 "trsvcid": "$NVMF_PORT", 00:10:00.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.353 "hdgst": ${hdgst:-false}, 00:10:00.353 "ddgst": ${ddgst:-false} 00:10:00.353 }, 00:10:00.353 "method": "bdev_nvme_attach_controller" 00:10:00.353 } 00:10:00.353 EOF 00:10:00.353 )") 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3710806 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 
00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:00.353 "params": { 00:10:00.353 "name": "Nvme1", 00:10:00.353 "trtype": "tcp", 00:10:00.353 "traddr": "10.0.0.2", 00:10:00.353 "adrfam": "ipv4", 00:10:00.353 "trsvcid": "4420", 00:10:00.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.353 "hdgst": false, 00:10:00.353 "ddgst": false 00:10:00.353 }, 00:10:00.353 "method": "bdev_nvme_attach_controller" 00:10:00.353 }' 00:10:00.353 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
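The `gen_nvmf_target_json` helper seen above builds one `bdev_nvme_attach_controller` stanza per subsystem from a shell heredoc template, then runs the result through `jq` and feeds it to bdevperf via `--json /dev/fd/63`. The following is a sketch of just the substitution step in Python, using the variable values visible in this run (`tcp`, `10.0.0.2`, `4420`; the `${hdgst:-false}`/`${ddgst:-false}` defaults evaluated to `false` here) — the real helper lives in `nvmf/common.sh` and is shell, not Python:

```python
import json
from string import Template

# Heredoc-style template mirroring the per-subsystem stanza built by the
# shell helper; $-placeholders stand in for the shell variables.
template = Template('''{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$transport",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$port",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}''')

# Substitute the values from this test run and verify the stanza is valid JSON,
# matching the expanded config printf'd to the bdevperf instances above.
stanza = template.substitute(subsystem="1", transport="tcp",
                             traddr="10.0.0.2", port="4420")
config = json.loads(stanza)
print(config["params"]["subnqn"])  # nqn.2016-06.io.spdk:cnode1
```

Each of the four bdevperf processes receives an identically expanded copy of this config, which is why the same `Nvme1` attach stanza is printed four times below.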
00:10:00.612 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:00.612 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:00.612 "params": { 00:10:00.612 "name": "Nvme1", 00:10:00.612 "trtype": "tcp", 00:10:00.612 "traddr": "10.0.0.2", 00:10:00.612 "adrfam": "ipv4", 00:10:00.612 "trsvcid": "4420", 00:10:00.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.612 "hdgst": false, 00:10:00.612 "ddgst": false 00:10:00.612 }, 00:10:00.612 "method": "bdev_nvme_attach_controller" 00:10:00.612 }' 00:10:00.612 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:00.612 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:00.612 "params": { 00:10:00.612 "name": "Nvme1", 00:10:00.612 "trtype": "tcp", 00:10:00.612 "traddr": "10.0.0.2", 00:10:00.612 "adrfam": "ipv4", 00:10:00.612 "trsvcid": "4420", 00:10:00.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.612 "hdgst": false, 00:10:00.612 "ddgst": false 00:10:00.612 }, 00:10:00.612 "method": "bdev_nvme_attach_controller" 00:10:00.612 }' 00:10:00.612 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:00.612 15:14:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:00.612 "params": { 00:10:00.612 "name": "Nvme1", 00:10:00.612 "trtype": "tcp", 00:10:00.612 "traddr": "10.0.0.2", 00:10:00.612 "adrfam": "ipv4", 00:10:00.612 "trsvcid": "4420", 00:10:00.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.612 "hdgst": false, 00:10:00.612 "ddgst": false 00:10:00.612 }, 00:10:00.612 "method": "bdev_nvme_attach_controller" 00:10:00.612 }' 00:10:00.612 [2024-11-06 15:14:28.047403] Starting SPDK v25.01-pre git sha1 
d1c46ed8e / DPDK 24.03.0 initialization... 00:10:00.612 [2024-11-06 15:14:28.047407] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:00.612 [2024-11-06 15:14:28.047408] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:00.612 [2024-11-06 15:14:28.047410] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:00.612 [2024-11-06 15:14:28.047497] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:00.612 [2024-11-06 15:14:28.047497] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:00.612 [2024-11-06 15:14:28.047499] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:00.612 [2024-11-06 15:14:28.047500] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:00.870 [2024-11-06 15:14:28.276938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.870 [2024-11-06 15:14:28.374009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:00.870 [2024-11-06 15:14:28.377023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.870 [2024-11-06 15:14:28.471556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.870 [2024-11-06
15:14:28.484316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:01.128 [2024-11-06 15:14:28.568485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.128 [2024-11-06 15:14:28.570147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:01.128 [2024-11-06 15:14:28.688560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:01.128 Running I/O for 1 seconds... 00:10:01.386 Running I/O for 1 seconds... 00:10:01.644 Running I/O for 1 seconds... 00:10:01.644 Running I/O for 1 seconds... 00:10:02.210 13813.00 IOPS, 53.96 MiB/s 00:10:02.210 Latency(us) 00:10:02.210 [2024-11-06T14:14:29.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.210 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:02.210 Nvme1n1 : 1.01 13867.88 54.17 0.00 0.00 9200.50 3854.14 13793.77 00:10:02.210 [2024-11-06T14:14:29.848Z] =================================================================================================================== 00:10:02.210 [2024-11-06T14:14:29.848Z] Total : 13867.88 54.17 0.00 0.00 9200.50 3854.14 13793.77 00:10:02.467 221992.00 IOPS, 867.16 MiB/s 00:10:02.467 Latency(us) 00:10:02.467 [2024-11-06T14:14:30.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.467 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:02.467 Nvme1n1 : 1.00 221628.77 865.74 0.00 0.00 574.76 263.31 1607.19 00:10:02.467 [2024-11-06T14:14:30.105Z] =================================================================================================================== 00:10:02.467 [2024-11-06T14:14:30.105Z] Total : 221628.77 865.74 0.00 0.00 574.76 263.31 1607.19 00:10:02.467 9289.00 IOPS, 36.29 MiB/s 00:10:02.467 Latency(us) 00:10:02.467 [2024-11-06T14:14:30.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.467 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 
128, IO size: 4096) 00:10:02.467 Nvme1n1 : 1.01 9352.36 36.53 0.00 0.00 13632.24 5523.75 23093.64 00:10:02.467 [2024-11-06T14:14:30.105Z] =================================================================================================================== 00:10:02.467 [2024-11-06T14:14:30.105Z] Total : 9352.36 36.53 0.00 0.00 13632.24 5523.75 23093.64 00:10:02.725 8723.00 IOPS, 34.07 MiB/s 00:10:02.725 Latency(us) 00:10:02.725 [2024-11-06T14:14:30.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.725 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:02.725 Nvme1n1 : 1.01 8789.83 34.34 0.00 0.00 14510.21 5991.86 21720.50 00:10:02.725 [2024-11-06T14:14:30.363Z] =================================================================================================================== 00:10:02.725 [2024-11-06T14:14:30.363Z] Total : 8789.83 34.34 0.00 0.00 14510.21 5991.86 21720.50 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3710809 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3710812 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3710816 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.292 rmmod nvme_tcp 00:10:03.292 rmmod nvme_fabrics 00:10:03.292 rmmod nvme_keyring 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3710604 ']' 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3710604 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3710604 ']' 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3710604 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:03.292 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3710604 00:10:03.551 15:14:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:03.551 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:03.551 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3710604' 00:10:03.551 killing process with pid 3710604 00:10:03.551 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3710604 00:10:03.551 15:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3710604 00:10:04.488 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.488 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:04.488 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:04.488 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:04.488 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:04.488 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:04.488 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:04.488 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.488 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.488 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.488 15:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.488 15:14:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.471 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:06.471 00:10:06.471 real 0m13.608s 00:10:06.471 user 0m28.873s 00:10:06.471 sys 0m6.785s 00:10:06.471 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.471 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.471 ************************************ 00:10:06.471 END TEST nvmf_bdev_io_wait 00:10:06.471 ************************************ 00:10:06.471 15:14:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:06.471 15:14:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:06.472 15:14:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.472 15:14:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.765 ************************************ 00:10:06.765 START TEST nvmf_queue_depth 00:10:06.765 ************************************ 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:06.765 * Looking for test storage... 
00:10:06.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:06.765 
15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:06.765 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:06.765 --rc genhtml_branch_coverage=1 00:10:06.765 --rc genhtml_function_coverage=1 00:10:06.765 --rc genhtml_legend=1 00:10:06.765 --rc geninfo_all_blocks=1 00:10:06.765 --rc geninfo_unexecuted_blocks=1 00:10:06.765 00:10:06.765 ' 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:06.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.765 --rc genhtml_branch_coverage=1 00:10:06.765 --rc genhtml_function_coverage=1 00:10:06.765 --rc genhtml_legend=1 00:10:06.765 --rc geninfo_all_blocks=1 00:10:06.765 --rc geninfo_unexecuted_blocks=1 00:10:06.765 00:10:06.765 ' 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:06.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.765 --rc genhtml_branch_coverage=1 00:10:06.765 --rc genhtml_function_coverage=1 00:10:06.765 --rc genhtml_legend=1 00:10:06.765 --rc geninfo_all_blocks=1 00:10:06.765 --rc geninfo_unexecuted_blocks=1 00:10:06.765 00:10:06.765 ' 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:06.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.765 --rc genhtml_branch_coverage=1 00:10:06.765 --rc genhtml_function_coverage=1 00:10:06.765 --rc genhtml_legend=1 00:10:06.765 --rc geninfo_all_blocks=1 00:10:06.765 --rc geninfo_unexecuted_blocks=1 00:10:06.765 00:10:06.765 ' 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.765 15:14:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.765 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.766 15:14:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.766 15:14:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.766 15:14:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:13.334 15:14:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:13.334 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:13.334 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:13.334 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:13.335 Found net devices under 0000:86:00.0: cvl_0_0 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:13.335 Found net devices under 0000:86:00.1: cvl_0_1 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.335 
15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:10:13.335 00:10:13.335 --- 10.0.0.2 ping statistics --- 00:10:13.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.335 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:10:13.335 00:10:13.335 --- 10.0.0.1 ping statistics --- 00:10:13.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.335 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3714906 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
3714906 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3714906 ']' 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:13.335 15:14:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.335 [2024-11-06 15:14:40.408827] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:13.335 [2024-11-06 15:14:40.408910] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.335 [2024-11-06 15:14:40.544446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.335 [2024-11-06 15:14:40.653647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.335 [2024-11-06 15:14:40.653695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:13.335 [2024-11-06 15:14:40.653706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.335 [2024-11-06 15:14:40.653719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.335 [2024-11-06 15:14:40.653728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.335 [2024-11-06 15:14:40.655229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.594 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:13.594 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:10:13.594 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.594 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:13.594 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 [2024-11-06 15:14:41.261335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 Malloc0 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 [2024-11-06 15:14:41.388797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.853 15:14:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3715149 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3715149 /var/tmp/bdevperf.sock 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3715149 ']' 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:13.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:13.853 15:14:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 [2024-11-06 15:14:41.467462] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:13.853 [2024-11-06 15:14:41.467539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715149 ] 00:10:14.112 [2024-11-06 15:14:41.592724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.112 [2024-11-06 15:14:41.708180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.679 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:14.679 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:10:14.679 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:14.679 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.679 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.938 NVMe0n1 00:10:14.938 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.938 15:14:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:14.938 Running I/O for 10 seconds... 
00:10:16.809 10240.00 IOPS, 40.00 MiB/s [2024-11-06T14:14:45.824Z] 10591.00 IOPS, 41.37 MiB/s [2024-11-06T14:14:46.760Z] 10581.67 IOPS, 41.33 MiB/s [2024-11-06T14:14:47.697Z] 10693.25 IOPS, 41.77 MiB/s [2024-11-06T14:14:48.632Z] 10649.60 IOPS, 41.60 MiB/s [2024-11-06T14:14:49.568Z] 10698.67 IOPS, 41.79 MiB/s [2024-11-06T14:14:50.503Z] 10692.86 IOPS, 41.77 MiB/s [2024-11-06T14:14:51.879Z] 10731.00 IOPS, 41.92 MiB/s [2024-11-06T14:14:52.815Z] 10715.56 IOPS, 41.86 MiB/s [2024-11-06T14:14:52.815Z] 10736.90 IOPS, 41.94 MiB/s 00:10:25.177 Latency(us) 00:10:25.177 [2024-11-06T14:14:52.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.177 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:25.177 Verification LBA range: start 0x0 length 0x4000 00:10:25.177 NVMe0n1 : 10.06 10779.90 42.11 0.00 0.00 94663.43 11234.74 60667.61 00:10:25.177 [2024-11-06T14:14:52.815Z] =================================================================================================================== 00:10:25.177 [2024-11-06T14:14:52.816Z] Total : 10779.90 42.11 0.00 0.00 94663.43 11234.74 60667.61 00:10:25.178 { 00:10:25.178 "results": [ 00:10:25.178 { 00:10:25.178 "job": "NVMe0n1", 00:10:25.178 "core_mask": "0x1", 00:10:25.178 "workload": "verify", 00:10:25.178 "status": "finished", 00:10:25.178 "verify_range": { 00:10:25.178 "start": 0, 00:10:25.178 "length": 16384 00:10:25.178 }, 00:10:25.178 "queue_depth": 1024, 00:10:25.178 "io_size": 4096, 00:10:25.178 "runtime": 10.055107, 00:10:25.178 "iops": 10779.895231348608, 00:10:25.178 "mibps": 42.1089657474555, 00:10:25.178 "io_failed": 0, 00:10:25.178 "io_timeout": 0, 00:10:25.178 "avg_latency_us": 94663.43414810215, 00:10:25.178 "min_latency_us": 11234.742857142857, 00:10:25.178 "max_latency_us": 60667.61142857143 00:10:25.178 } 00:10:25.178 ], 00:10:25.178 "core_count": 1 00:10:25.178 } 00:10:25.178 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 3715149 00:10:25.178 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3715149 ']' 00:10:25.178 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3715149 00:10:25.178 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:10:25.178 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:25.178 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3715149 00:10:25.178 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:25.178 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:25.178 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3715149' 00:10:25.178 killing process with pid 3715149 00:10:25.178 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3715149 00:10:25.178 Received shutdown signal, test time was about 10.000000 seconds 00:10:25.178 00:10:25.178 Latency(us) 00:10:25.178 [2024-11-06T14:14:52.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.178 [2024-11-06T14:14:52.816Z] =================================================================================================================== 00:10:25.178 [2024-11-06T14:14:52.816Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:25.178 15:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3715149 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:26.114 rmmod nvme_tcp 00:10:26.114 rmmod nvme_fabrics 00:10:26.114 rmmod nvme_keyring 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3714906 ']' 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3714906 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3714906 ']' 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3714906 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3714906 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3714906' 00:10:26.114 killing process with pid 3714906 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3714906 00:10:26.114 15:14:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3714906 00:10:27.491 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.491 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.491 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.491 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:27.491 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:27.491 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.491 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.491 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:27.491 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:27.491 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.491 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.491 15:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.395 15:14:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:29.395 00:10:29.395 real 0m22.759s 00:10:29.395 user 0m27.555s 00:10:29.395 sys 0m6.168s 00:10:29.395 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:29.395 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:29.395 ************************************ 00:10:29.395 END TEST nvmf_queue_depth 00:10:29.395 ************************************ 00:10:29.395 15:14:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:29.395 15:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:29.395 15:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:29.395 15:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.395 ************************************ 00:10:29.395 START TEST nvmf_target_multipath 00:10:29.395 ************************************ 00:10:29.395 15:14:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:29.655 * Looking for test storage... 
00:10:29.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:29.655 15:14:57 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:29.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.655 --rc genhtml_branch_coverage=1 00:10:29.655 --rc genhtml_function_coverage=1 00:10:29.655 --rc genhtml_legend=1 00:10:29.655 --rc geninfo_all_blocks=1 00:10:29.655 --rc geninfo_unexecuted_blocks=1 00:10:29.655 00:10:29.655 ' 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:29.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.655 --rc genhtml_branch_coverage=1 00:10:29.655 --rc genhtml_function_coverage=1 00:10:29.655 --rc genhtml_legend=1 00:10:29.655 --rc geninfo_all_blocks=1 00:10:29.655 --rc geninfo_unexecuted_blocks=1 00:10:29.655 00:10:29.655 ' 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:29.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.655 --rc genhtml_branch_coverage=1 00:10:29.655 --rc genhtml_function_coverage=1 00:10:29.655 --rc genhtml_legend=1 00:10:29.655 --rc geninfo_all_blocks=1 00:10:29.655 --rc geninfo_unexecuted_blocks=1 00:10:29.655 00:10:29.655 ' 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:29.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.655 --rc genhtml_branch_coverage=1 00:10:29.655 --rc genhtml_function_coverage=1 00:10:29.655 --rc genhtml_legend=1 00:10:29.655 --rc geninfo_all_blocks=1 00:10:29.655 --rc geninfo_unexecuted_blocks=1 00:10:29.655 00:10:29.655 ' 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.655 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:29.656 15:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.223 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:36.224 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:36.224 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:36.224 Found net devices under 0000:86:00.0: cvl_0_0 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.224 15:15:02 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:36.224 Found net devices under 0000:86:00.1: cvl_0_1 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.224 15:15:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:36.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:10:36.224 00:10:36.224 --- 10.0.0.2 ping statistics --- 00:10:36.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.224 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:36.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:10:36.224 00:10:36.224 --- 10.0.0.1 ping statistics --- 00:10:36.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.224 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:36.224 only one NIC for nvmf test 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:36.224 15:15:03 
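The `nvmf_tcp_init` sequence traced above moves the target NIC into a private network namespace, assigns the 10.0.0.x addresses, opens TCP port 4420, and verifies reachability with `ping`. A dry-run sketch of those steps (the `run` wrapper only echoes each command, so no root or real NICs are needed; interface names and IPs mirror the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the TCP test-network bring-up seen in the log.
run() { echo "+ $*"; }   # swap for plain execution on a real test host (needs root)

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

run ip netns add "$NS"                                   # isolate the target NIC
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"          # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                   # verify target is reachable
```

The two pings in the log (host to namespace and namespace back to host) confirm the veth-less, physical-NIC-in-namespace topology works before the NVMe-oF target starts.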
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.224 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.225 rmmod nvme_tcp 00:10:36.225 rmmod nvme_fabrics 00:10:36.225 rmmod nvme_keyring 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.225 15:15:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.128 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:38.128 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:38.128 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:38.128 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:38.129 00:10:38.129 real 0m8.411s 00:10:38.129 user 0m1.840s 00:10:38.129 sys 0m4.549s 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:38.129 ************************************ 00:10:38.129 END TEST nvmf_target_multipath 00:10:38.129 ************************************ 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core 
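The `iptr` cleanup traced above relies on every SPDK-added firewall rule carrying an `SPDK_NVMF` comment, so teardown is a single save/filter/restore pipeline. A sketch of that pattern, using a here-string ruleset in place of live `iptables-save` output so it runs without root:

```shell
#!/usr/bin/env bash
# Tagged-rule cleanup pattern from nvmf/common.sh's iptr:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
# Every rule SPDK inserts is tagged with an SPDK_NVMF comment (see ipts above),
# so filtering the tag out removes exactly the test rules and nothing else.
ruleset='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:rule
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Simulated save/filter step; only the untagged (non-SPDK) rule survives.
printf '%s\n' "$ruleset" | grep -v SPDK_NVMF
```

Tagging rules at insert time and filtering by tag at teardown avoids tracking rule positions, which shift as other rules are added.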
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.129 ************************************ 00:10:38.129 START TEST nvmf_zcopy 00:10:38.129 ************************************ 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:38.129 * Looking for test storage... 00:10:38.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.129 15:15:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:38.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.129 --rc genhtml_branch_coverage=1 00:10:38.129 --rc genhtml_function_coverage=1 00:10:38.129 --rc genhtml_legend=1 00:10:38.129 --rc geninfo_all_blocks=1 00:10:38.129 --rc geninfo_unexecuted_blocks=1 00:10:38.129 00:10:38.129 ' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:38.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.129 --rc genhtml_branch_coverage=1 00:10:38.129 --rc genhtml_function_coverage=1 00:10:38.129 --rc genhtml_legend=1 00:10:38.129 --rc geninfo_all_blocks=1 00:10:38.129 --rc geninfo_unexecuted_blocks=1 00:10:38.129 00:10:38.129 ' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:38.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.129 --rc genhtml_branch_coverage=1 00:10:38.129 --rc genhtml_function_coverage=1 00:10:38.129 --rc genhtml_legend=1 00:10:38.129 --rc geninfo_all_blocks=1 00:10:38.129 --rc geninfo_unexecuted_blocks=1 00:10:38.129 00:10:38.129 ' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:38.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.129 --rc genhtml_branch_coverage=1 00:10:38.129 --rc 
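The `lt 1.15 2` trace above (scripts/common.sh `cmp_versions`) splits both version strings on dots and compares them field by field, padding missing fields with zero. A minimal re-sketch of that comparison, with `ver_lt` as an illustrative stand-in for the real helper:

```shell
#!/usr/bin/env bash
# Sketch of dotted-version comparison as traced for the lcov check (1.15 < 2).
# Splits on '.', compares numerically per field, treats absent fields as 0.
ver_lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"   # matches the log's outcome
```

Field-wise numeric comparison is what makes `1.15 < 2` true while a naive string compare would get `1.15` vs `2` right but `1.9` vs `1.15` wrong.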
genhtml_function_coverage=1 00:10:38.129 --rc genhtml_legend=1 00:10:38.129 --rc geninfo_all_blocks=1 00:10:38.129 --rc geninfo_unexecuted_blocks=1 00:10:38.129 00:10:38.129 ' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.129 15:15:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.129 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:38.130 15:15:05 
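The log records a real script error at nvmf/common.sh line 33: `'[' '' -eq 1 ']'` aborts with "integer expression expected" because `-eq` requires an integer and the tested variable expands to an empty string. The test suite tolerates it (the branch simply isn't taken), but a hedged sketch of the defensive pattern that avoids the message, with `shm_id` as a hypothetical stand-in for whichever variable expands empty there:

```shell
#!/usr/bin/env bash
# Reproduces the failure mode seen in the log and the usual fix:
# an empty string is not a valid operand for -eq, so default it to 0 first.
shm_id=""                               # hypothetical; mirrors the empty expansion

# Broken form (what line 33 effectively ran):  [ "$shm_id" -eq 1 ]
# Safe form: ${var:-0} guarantees an integer operand.
if [ "${shm_id:-0}" -eq 1 ]; then
    echo "shm branch taken"
else
    echo "shm branch skipped"
fi
```

The same effect can be had with `[[ -n $shm_id && $shm_id -eq 1 ]]`, since `[[ ]]` short-circuits before the numeric test.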
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.130 15:15:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:44.697 15:15:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.697 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:44.698 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:44.698 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:44.698 Found net devices under 0000:86:00.0: cvl_0_0 00:10:44.698 15:15:11 
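The "Found net devices under 0000:86:00.0: cvl_0_0" lines above come from globbing the PCI device's `net/` directory in sysfs (nvmf/common.sh@411 and @427). A minimal sketch of that lookup, using a mock sysfs tree so it runs without the E810 hardware; `SYSFS_ROOT` is a stand-in for `/sys/bus/pci/devices`, and the PCI address and interface name are taken from the log:

```shell
# Mock sysfs tree standing in for /sys/bus/pci/devices, so this runs
# without the E810 hardware; device and interface names are from the log.
SYSFS_ROOT=$(mktemp -d)
mkdir -p "$SYSFS_ROOT/0000:86:00.0/net/cvl_0_0"

pci=0000:86:00.0
# Same two steps as nvmf/common.sh: glob the device's net/ directory,
# then strip the leading path to keep only the interface names.
pci_net_devs=("$SYSFS_ROOT/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```

On a real host, replacing `$SYSFS_ROOT` with `/sys/bus/pci/devices` reproduces the lookup the script performs for each discovered NIC.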
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:44.698 Found net devices under 0000:86:00.1: cvl_0_1 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.698 15:15:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:44.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:44.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms
00:10:44.698
00:10:44.698 --- 10.0.0.2 ping statistics ---
00:10:44.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:44.698 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:44.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
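The nvmf_tcp_init steps above move the target-side port into a private network namespace, leave its peer on the host, address both ends, open TCP port 4420, and ping in both directions to verify the path. A dry-run sketch of that sequence; `run()` only prints each command, since the real ones need root, and the names and addresses are the ones from the log:

```shell
# Dry-run of the namespace setup above: run() prints instead of executing,
# since the real commands need root. Names and addresses are from the log.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"                          # target port into the netns
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # initiator side (host)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target side (netns)
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                            # host -> netns
run ip netns exec "$NS" ping -c 1 10.0.0.1                        # netns -> host
```

Dropping the `run` prefix executes the plan for real; the log's `ipts` wrapper additionally tags the iptables rule with an `SPDK_NVMF:` comment so it can be cleaned up later.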
00:10:44.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms
00:10:44.698
00:10:44.698 --- 10.0.0.1 ping statistics ---
00:10:44.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:44.698 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3725005
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3725005
00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3725005 ']' 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:44.698 15:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.698 [2024-11-06 15:15:11.736170] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:44.698 [2024-11-06 15:15:11.736275] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.698 [2024-11-06 15:15:11.865893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.698 [2024-11-06 15:15:11.968629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.698 [2024-11-06 15:15:11.968668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
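`waitforlisten 3725005` above blocks until the freshly started `nvmf_tgt` answers on its RPC socket ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."). A hedged sketch of that polling loop, simulating socket creation with a background touch so it is runnable anywhere; the real helper watches `/var/tmp/spdk.sock` and also verifies the PID stays alive:

```shell
# Hedged stand-in for waitforlisten: poll until the RPC socket shows up.
# SOCK is a temp path here; the real target listens on /var/tmp/spdk.sock.
SOCK=$(mktemp -u)
( sleep 0.2; : > "$SOCK" ) &    # simulate nvmf_tgt creating its socket

for _ in $(seq 1 100); do       # bounded retries, like max_retries=100 above
    [ -e "$SOCK" ] && break
    sleep 0.1
done
wait
[ -e "$SOCK" ] && echo "listening on $SOCK"
```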
00:10:44.698 [2024-11-06 15:15:11.968679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.698 [2024-11-06 15:15:11.968690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.698 [2024-11-06 15:15:11.968698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.698 [2024-11-06 15:15:11.970330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.957 [2024-11-06 15:15:12.579832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.957 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.216 [2024-11-06 15:15:12.600030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.216 malloc0 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:45.216 { 00:10:45.216 "params": { 00:10:45.216 "name": "Nvme$subsystem", 00:10:45.216 "trtype": "$TEST_TRANSPORT", 00:10:45.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:45.216 "adrfam": "ipv4", 00:10:45.216 "trsvcid": "$NVMF_PORT", 00:10:45.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:45.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:45.216 "hdgst": ${hdgst:-false}, 00:10:45.216 "ddgst": ${ddgst:-false} 00:10:45.216 }, 00:10:45.216 "method": "bdev_nvme_attach_controller" 00:10:45.216 } 00:10:45.216 EOF 00:10:45.216 )") 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
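The `rpc_cmd` calls scattered through the trace above configure the whole target: a TCP transport with zero-copy enabled, a subsystem, its listeners, and a malloc-backed namespace. Gathered in order, with `rpc_cmd` as a printing stand-in (assumption: the real helper routes these through SPDK's `rpc.py` to the `/var/tmp/spdk.sock` socket seen earlier in the log):

```shell
# The rpc_cmd calls above, gathered in one place. rpc_cmd is a printing
# stand-in; the real test sends each call to the running nvmf_tgt.
rpc_cmd() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy               # zero-copy on
rpc_cmd nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_malloc_create 32 4096 -b malloc0                      # 32 MiB, 4 KiB blocks
rpc_cmd nvmf_subsystem_add_ns "$NQN" malloc0 -n 1
```

The `--zcopy` flag on `nvmf_create_transport` is the point of this test: it is what the later bdevperf runs exercise.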
00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:45.216 15:15:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:45.216 "params": { 00:10:45.216 "name": "Nvme1", 00:10:45.216 "trtype": "tcp", 00:10:45.216 "traddr": "10.0.0.2", 00:10:45.216 "adrfam": "ipv4", 00:10:45.216 "trsvcid": "4420", 00:10:45.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:45.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:45.216 "hdgst": false, 00:10:45.216 "ddgst": false 00:10:45.216 }, 00:10:45.216 "method": "bdev_nvme_attach_controller" 00:10:45.216 }' 00:10:45.216 [2024-11-06 15:15:12.732837] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:45.216 [2024-11-06 15:15:12.732917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725102 ] 00:10:45.475 [2024-11-06 15:15:12.855871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.475 [2024-11-06 15:15:12.967396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.041 Running I/O for 10 seconds... 
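`gen_nvmf_target_json` above expands a heredoc template per subsystem and merges the pieces with `jq`; the `printf '%s\n' '{...}'` in the log is the merged result fed to bdevperf on `/dev/fd/62`. A reduced sketch of the same expansion with the log's concrete values (single subsystem, no `jq` merge):

```shell
# Reduced version of gen_nvmf_target_json above: expand the heredoc
# template once with the values seen in the log's printf output.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

With `hdgst`/`ddgst` unset, the `${var:-false}` defaults produce the `"hdgst": false, "ddgst": false` seen in the log.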
00:10:48.352 7456.00 IOPS, 58.25 MiB/s
[2024-11-06T14:15:16.925Z] 7494.50 IOPS, 58.55 MiB/s
[2024-11-06T14:15:17.860Z] 7519.33 IOPS, 58.74 MiB/s
[2024-11-06T14:15:18.843Z] 7525.00 IOPS, 58.79 MiB/s
[2024-11-06T14:15:19.778Z] 7525.20 IOPS, 58.79 MiB/s
[2024-11-06T14:15:20.713Z] 7532.83 IOPS, 58.85 MiB/s
[2024-11-06T14:15:21.648Z] 7513.86 IOPS, 58.70 MiB/s
[2024-11-06T14:15:22.584Z] 7509.38 IOPS, 58.67 MiB/s
[2024-11-06T14:15:23.959Z] 7504.44 IOPS, 58.63 MiB/s
[2024-11-06T14:15:23.959Z] 7506.00 IOPS, 58.64 MiB/s
00:10:56.321 Latency(us)
00:10:56.321 [2024-11-06T14:15:23.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:56.321 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:56.321 Verification LBA range: start 0x0 length 0x1000
00:10:56.321 Nvme1n1 : 10.05 7479.16 58.43 0.00 0.00 16997.79 2995.93 44439.65
00:10:56.321 [2024-11-06T14:15:23.959Z] ===================================================================================================================
00:10:56.321 [2024-11-06T14:15:23.959Z] Total : 7479.16 58.43 0.00 0.00 16997.79 2995.93 44439.65
00:10:56.887 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3727102
00:10:56.887 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:56.887 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:56.887 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:56.887 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:56.887 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:10:56.887 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:56.887 15:15:24
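A quick consistency check on the bdevperf results above: MiB/s should equal IOPS times the 8 KiB I/O size, and it does for the reported average (7479.16 IOPS, 58.43 MiB/s):

```shell
# 7479.16 IOPS * 8192 B per I/O, converted to MiB/s (values from the table).
awk 'BEGIN { printf "%.2f\n", 7479.16 * 8192 / (1024 * 1024) }'
# -> 58.43
```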
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:56.887 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:56.887 { 00:10:56.887 "params": { 00:10:56.887 "name": "Nvme$subsystem", 00:10:56.887 "trtype": "$TEST_TRANSPORT", 00:10:56.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.887 "adrfam": "ipv4", 00:10:56.887 "trsvcid": "$NVMF_PORT", 00:10:56.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.887 "hdgst": ${hdgst:-false}, 00:10:56.887 "ddgst": ${ddgst:-false} 00:10:56.887 }, 00:10:56.887 "method": "bdev_nvme_attach_controller" 00:10:56.887 } 00:10:56.887 EOF 00:10:56.887 )") 00:10:56.887 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:56.888 [2024-11-06 15:15:24.493860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.888 [2024-11-06 15:15:24.493903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.888 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:56.888 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:56.888 15:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:56.888 "params": { 00:10:56.888 "name": "Nvme1", 00:10:56.888 "trtype": "tcp", 00:10:56.888 "traddr": "10.0.0.2", 00:10:56.888 "adrfam": "ipv4", 00:10:56.888 "trsvcid": "4420", 00:10:56.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:56.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:56.888 "hdgst": false, 00:10:56.888 "ddgst": false 00:10:56.888 }, 00:10:56.888 "method": "bdev_nvme_attach_controller" 00:10:56.888 }' 00:10:56.888 [2024-11-06 15:15:24.505865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.888 [2024-11-06 15:15:24.505893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.888 [2024-11-06 15:15:24.517866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.888 [2024-11-06 15:15:24.517891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.146 [2024-11-06 15:15:24.529918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.529943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.541931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.541954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.553964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.553986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.556730] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:57.147 [2024-11-06 15:15:24.556805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727102 ] 00:10:57.147 [2024-11-06 15:15:24.566000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.566023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.578040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.578064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.590057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.590078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.602107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.602127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.614118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.614142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.626169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.626190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.638191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.638217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:57.147 [2024-11-06 15:15:24.650221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.650241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.662268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.662288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.674303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.674322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.680452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.147 [2024-11-06 15:15:24.686316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.686335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.698397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.698422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.710392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.710412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.722429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.722449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.734479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.734502] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.746478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.746499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.758523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.758544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.147 [2024-11-06 15:15:24.770552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.147 [2024-11-06 15:15:24.770572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.405 [2024-11-06 15:15:24.782581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.405 [2024-11-06 15:15:24.782603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.405 [2024-11-06 15:15:24.792019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.405 [2024-11-06 15:15:24.794621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.405 [2024-11-06 15:15:24.794642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.405 [2024-11-06 15:15:24.806657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.405 [2024-11-06 15:15:24.806678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.405 [2024-11-06 15:15:24.818704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.405 [2024-11-06 15:15:24.818726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.405 [2024-11-06 15:15:24.830720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use
00:10:57.405 [2024-11-06 15:15:24.830744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:57.405 [2024-11-06 15:15:24.842757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:57.405 [2024-11-06 15:15:24.842779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of records (subsystem.c:2123 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517 "Unable to add namespace") repeats at roughly 12 ms intervals through 15:15:25.203779 ...]
00:10:57.664 Running I/O for 5 seconds...
[... the same error pair continues at roughly 16 ms intervals through 15:15:26.203579 ...]
00:10:58.699 14274.00 IOPS, 111.52 MiB/s [2024-11-06T14:15:26.337Z]
[... the same error pair continues through 15:15:27.199795 ...]
00:10:59.735 [2024-11-06 15:15:27.199821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:10:59.735 [2024-11-06 15:15:27.211642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.735 [2024-11-06 15:15:27.211668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.735 14335.00 IOPS, 111.99 MiB/s [2024-11-06T14:15:27.373Z] [2024-11-06 15:15:27.227355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.735 [2024-11-06 15:15:27.227379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.735 [2024-11-06 15:15:27.243887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.735 [2024-11-06 15:15:27.243912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.735 [2024-11-06 15:15:27.259708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.735 [2024-11-06 15:15:27.259733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.735 [2024-11-06 15:15:27.271721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.735 [2024-11-06 15:15:27.271747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.735 [2024-11-06 15:15:27.288090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.735 [2024-11-06 15:15:27.288115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.735 [2024-11-06 15:15:27.303880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.735 [2024-11-06 15:15:27.303905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.735 [2024-11-06 15:15:27.318594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.735 [2024-11-06 15:15:27.318619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:59.735 [2024-11-06 15:15:27.330539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.735 [2024-11-06 15:15:27.330564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.735 [2024-11-06 15:15:27.347242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.736 [2024-11-06 15:15:27.347267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.736 [2024-11-06 15:15:27.363162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.736 [2024-11-06 15:15:27.363187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.379887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.379913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.395620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.395646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.411499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.411524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.428112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.428138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.444041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.444067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.460766] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.460791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.476874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.476899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.491896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.491921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.507829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.507855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.522540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.522572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.537988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.538014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.554647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.554671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.571185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.571217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.587934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.587959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.604618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.604648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.997 [2024-11-06 15:15:27.620357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.997 [2024-11-06 15:15:27.620382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.632727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.632755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.649007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.649033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.665333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.665359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.677682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.677706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.692877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.692902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.705016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 
[2024-11-06 15:15:27.705040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.720966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.720991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.737190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.737222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.752961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.752986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.767422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.767447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.782721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.782747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.798928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.798954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.810694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.810721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.827515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.827541] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.843706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.843731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.861288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.861314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.877256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.877280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.283 [2024-11-06 15:15:27.892965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.283 [2024-11-06 15:15:27.892996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.577 [2024-11-06 15:15:27.908485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.577 [2024-11-06 15:15:27.908511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.577 [2024-11-06 15:15:27.925248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.577 [2024-11-06 15:15:27.925274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.577 [2024-11-06 15:15:27.942092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.577 [2024-11-06 15:15:27.942120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.577 [2024-11-06 15:15:27.958288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.577 [2024-11-06 15:15:27.958314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:00.577 [2024-11-06 15:15:27.974768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.577 [2024-11-06 15:15:27.974794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.577 [2024-11-06 15:15:27.990891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.577 [2024-11-06 15:15:27.990918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.577 [2024-11-06 15:15:28.002621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.577 [2024-11-06 15:15:28.002646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.577 [2024-11-06 15:15:28.018829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.577 [2024-11-06 15:15:28.018855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.577 [2024-11-06 15:15:28.035007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.577 [2024-11-06 15:15:28.035032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.578 [2024-11-06 15:15:28.051084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.578 [2024-11-06 15:15:28.051110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.578 [2024-11-06 15:15:28.067553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.578 [2024-11-06 15:15:28.067579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.578 [2024-11-06 15:15:28.084029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.578 [2024-11-06 15:15:28.084056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.578 [2024-11-06 15:15:28.099831] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.578 [2024-11-06 15:15:28.099858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.578 [2024-11-06 15:15:28.115249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.578 [2024-11-06 15:15:28.115276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.578 [2024-11-06 15:15:28.131464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.578 [2024-11-06 15:15:28.131489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.578 [2024-11-06 15:15:28.147459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.578 [2024-11-06 15:15:28.147486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.578 [2024-11-06 15:15:28.159144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.578 [2024-11-06 15:15:28.159171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.578 [2024-11-06 15:15:28.175037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.578 [2024-11-06 15:15:28.175062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.578 [2024-11-06 15:15:28.191178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.578 [2024-11-06 15:15:28.191215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.851 [2024-11-06 15:15:28.207326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.851 [2024-11-06 15:15:28.207352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.851 14384.00 IOPS, 112.38 MiB/s [2024-11-06T14:15:28.489Z] [2024-11-06 15:15:28.219525] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.851 [2024-11-06 15:15:28.219550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.851 [2024-11-06 15:15:28.235332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.851 [2024-11-06 15:15:28.235357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.851 [2024-11-06 15:15:28.251309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.851 [2024-11-06 15:15:28.251334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.851 [2024-11-06 15:15:28.262867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.851 [2024-11-06 15:15:28.262893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.851 [2024-11-06 15:15:28.278696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.851 [2024-11-06 15:15:28.278720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.851 [2024-11-06 15:15:28.295468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.851 [2024-11-06 15:15:28.295492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.851 [2024-11-06 15:15:28.307619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.851 [2024-11-06 15:15:28.307643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.851 [2024-11-06 15:15:28.323735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.852 [2024-11-06 15:15:28.323760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.852 [2024-11-06 15:15:28.339851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:00.852 [2024-11-06 15:15:28.339876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.852 [2024-11-06 15:15:28.356221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.852 [2024-11-06 15:15:28.356246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.852 [2024-11-06 15:15:28.372832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.852 [2024-11-06 15:15:28.372858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.852 [2024-11-06 15:15:28.384772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.852 [2024-11-06 15:15:28.384797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.852 [2024-11-06 15:15:28.400377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.852 [2024-11-06 15:15:28.400402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.852 [2024-11-06 15:15:28.416764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.852 [2024-11-06 15:15:28.416790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.852 [2024-11-06 15:15:28.433433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.852 [2024-11-06 15:15:28.433458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.852 [2024-11-06 15:15:28.449775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.852 [2024-11-06 15:15:28.449802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.852 [2024-11-06 15:15:28.460875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.852 
[2024-11-06 15:15:28.460902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.852 [2024-11-06 15:15:28.476963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.852 [2024-11-06 15:15:28.476989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.110 [2024-11-06 15:15:28.492990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.110 [2024-11-06 15:15:28.493015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.110 [2024-11-06 15:15:28.509441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.509473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.525716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.525741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.537647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.537671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.553976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.554002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.569606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.569631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.581246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.581272] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.596899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.596924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.613113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.613140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.624846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.624872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.641313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.641337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.657504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.657529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.673463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.673488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.687985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.688012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.700655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.700680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:01.111 [2024-11-06 15:15:28.712607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.712633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.728410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.728435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.111 [2024-11-06 15:15:28.744390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.111 [2024-11-06 15:15:28.744416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.756321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.756347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.772691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.772716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.788912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.788937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.804561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.804585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.819185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.819217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.833395] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.833420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.849638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.849662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.865736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.865761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.881989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.882014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.897502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.897527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.911898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.911924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.927609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.927634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.944061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.944087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.960108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.960134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.972462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.972488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:28.988232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:28.988256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.370 [2024-11-06 15:15:29.003804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.370 [2024-11-06 15:15:29.003829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.019700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.019725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.035313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.035338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.049961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.049986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.065169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.065194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.081433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 
[2024-11-06 15:15:29.081458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.095853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.095879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.107198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.107232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.122752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.122778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.135500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.135525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.152057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.152083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.168396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.168421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.184457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.184484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.197907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.197934] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.213432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.213458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 14407.25 IOPS, 112.56 MiB/s [2024-11-06T14:15:29.267Z] [2024-11-06 15:15:29.230299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.230325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.246640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.246665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.629 [2024-11-06 15:15:29.262960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.629 [2024-11-06 15:15:29.262985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.279302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.279328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.295904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.295930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.312090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.312116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.328112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.328142] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.342584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.342612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.357881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.357907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.370498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.370524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.387061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.387088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.403899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.403926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.420289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.420314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.432210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.432252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.448496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.448523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:01.888 [2024-11-06 15:15:29.464255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.464281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.475990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.476024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.491279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.491305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.507660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.507686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.888 [2024-11-06 15:15:29.519843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.888 [2024-11-06 15:15:29.519869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.146 [2024-11-06 15:15:29.531370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.146 [2024-11-06 15:15:29.531396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.146 [2024-11-06 15:15:29.547077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.146 [2024-11-06 15:15:29.547103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.146 [2024-11-06 15:15:29.563174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.146 [2024-11-06 15:15:29.563199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.578064] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.578088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.594095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.594120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.606351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.606381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.620813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.620838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.632608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.632634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.649212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.649238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.664306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.664330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.676356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.676381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.692647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.692673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.708664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.708690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.724892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.724917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.741108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.741134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.752346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.752372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.147 [2024-11-06 15:15:29.768606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.147 [2024-11-06 15:15:29.768633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.784230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.784256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.799410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.799436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.815646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 
[2024-11-06 15:15:29.815672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.827472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.827497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.843712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.843736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.860103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.860128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.876125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.876151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.887849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.887879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.904568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.904593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.920336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.920360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.934672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.934697] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.949712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.949738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.961320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.961345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.977525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.977550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:29.993725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:29.993751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:30.009790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:30.009818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:30.025040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:30.025067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.406 [2024-11-06 15:15:30.040723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.406 [2024-11-06 15:15:30.040753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.664 [2024-11-06 15:15:30.057631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.664 [2024-11-06 15:15:30.057658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:02.664 [2024-11-06 15:15:30.074122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.664 [2024-11-06 15:15:30.074148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.664 [2024-11-06 15:15:30.091644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.664 [2024-11-06 15:15:30.091670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.664 [2024-11-06 15:15:30.107375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.664 [2024-11-06 15:15:30.107401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.664 [2024-11-06 15:15:30.124372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.664 [2024-11-06 15:15:30.124400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.664 [2024-11-06 15:15:30.136905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.664 [2024-11-06 15:15:30.136931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.664 [2024-11-06 15:15:30.153606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.664 [2024-11-06 15:15:30.153631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.664 [2024-11-06 15:15:30.170547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.664 [2024-11-06 15:15:30.170573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.664 [2024-11-06 15:15:30.182999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.664 [2024-11-06 15:15:30.183029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.664 [2024-11-06 15:15:30.194444] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.664 [2024-11-06 15:15:30.194468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.664 [2024-11-06 15:15:30.210810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.664 [2024-11-06 15:15:30.210835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.664 14390.00 IOPS, 112.42 MiB/s [2024-11-06T14:15:30.302Z] [2024-11-06 15:15:30.226326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.665 [2024-11-06 15:15:30.226351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.665
00:11:02.665 Latency(us)
00:11:02.665 [2024-11-06T14:15:30.303Z] Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:02.665 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:02.665 Nvme1n1                                :       5.01   14391.29     112.43       0.00     0.00    8884.75    4119.41   18474.91
00:11:02.665 [2024-11-06T14:15:30.303Z] ===================================================================================================================
00:11:02.665 [2024-11-06T14:15:30.303Z] Total                                  :            14391.29     112.43       0.00     0.00    8884.75    4119.41   18474.91
00:11:02.665 [2024-11-06 15:15:30.235123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.665 [2024-11-06 15:15:30.235146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.665 [2024-11-06 15:15:30.247155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.665 [2024-11-06 15:15:30.247178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.665 [2024-11-06 15:15:30.259173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.665 [2024-11-06 15:15:30.259194] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.665 [2024-11-06 15:15:30.271223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.665 [2024-11-06 15:15:30.271244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.665 [2024-11-06 15:15:30.283289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.665 [2024-11-06 15:15:30.283327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.665 [2024-11-06 15:15:30.295333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.665 [2024-11-06 15:15:30.295359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.307338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.307360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.319348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.319369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.331394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.331415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.343433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.343457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.355440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.355461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:02.923 [2024-11-06 15:15:30.367506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.367532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.379522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.379546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.391567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.391593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.403587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.403607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.415604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.415624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.427646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.427666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.439683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.439703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.451716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.451736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.463745] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.923 [2024-11-06 15:15:30.463766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.923 [2024-11-06 15:15:30.475763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.924 [2024-11-06 15:15:30.475783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.924 [2024-11-06 15:15:30.487811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.924 [2024-11-06 15:15:30.487830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.924 [2024-11-06 15:15:30.499847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.924 [2024-11-06 15:15:30.499866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.924 [2024-11-06 15:15:30.511869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.924 [2024-11-06 15:15:30.511889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.924 [2024-11-06 15:15:30.523914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.924 [2024-11-06 15:15:30.523934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.924 [2024-11-06 15:15:30.535941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.924 [2024-11-06 15:15:30.535961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.924 [2024-11-06 15:15:30.547973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.924 [2024-11-06 15:15:30.547994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.560032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.560055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.572041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.572063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.584088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.584110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.596125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.596147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.608138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.608159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.620197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.620226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.632254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.632285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.644253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.644276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.656289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 
[2024-11-06 15:15:30.656311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.668302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.668323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.680348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.680370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.692380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.692400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.704401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.704422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.716440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.716461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.728474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.728495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.740518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.740539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.182 [2024-11-06 15:15:30.752551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.182 [2024-11-06 15:15:30.752571] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.183 [2024-11-06 15:15:30.764571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.183 [2024-11-06 15:15:30.764594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.183 [2024-11-06 15:15:30.776613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.183 [2024-11-06 15:15:30.776634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.183 [2024-11-06 15:15:30.788645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.183 [2024-11-06 15:15:30.788665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.183 [2024-11-06 15:15:30.800671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.183 [2024-11-06 15:15:30.800692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.183 [2024-11-06 15:15:30.812715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.183 [2024-11-06 15:15:30.812736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.442 [2024-11-06 15:15:30.824745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.442 [2024-11-06 15:15:30.824769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.442 [2024-11-06 15:15:30.836766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.442 [2024-11-06 15:15:30.836786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.442 [2024-11-06 15:15:30.848814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.442 [2024-11-06 15:15:30.848834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:03.442 [2024-11-06 15:15:30.860832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.442 [2024-11-06 15:15:30.860852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.442 [last two messages repeated 20 more times, through 2024-11-06 15:15:31.101522] 00:11:03.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3727102) - No such process 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3727102 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n
1000000 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:03.701 delay0 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.701 15:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:03.701 [2024-11-06 15:15:31.292179] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:10.257 Initializing NVMe Controllers 00:11:10.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:10.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:10.257 Initialization complete. Launching workers. 
00:11:10.257 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 944 00:11:10.257 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1231, failed to submit 33 00:11:10.257 success 1063, unsuccessful 168, failed 0 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.257 rmmod nvme_tcp 00:11:10.257 rmmod nvme_fabrics 00:11:10.257 rmmod nvme_keyring 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3725005 ']' 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3725005 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3725005 ']' 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3725005 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@957 -- # uname 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3725005 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3725005' 00:11:10.257 killing process with pid 3725005 00:11:10.257 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3725005 00:11:10.258 15:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3725005 00:11:11.194 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:11.194 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:11.194 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:11.194 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:11.194 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:11.194 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:11.194 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:11.194 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:11.194 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:11.194 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:11.194 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.194 15:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.848 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:13.848 00:11:13.848 real 0m35.395s 00:11:13.848 user 0m48.545s 00:11:13.848 sys 0m11.057s 00:11:13.848 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:13.848 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:13.848 ************************************ 00:11:13.848 END TEST nvmf_zcopy 00:11:13.848 ************************************ 00:11:13.848 15:15:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:13.848 15:15:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:13.848 15:15:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:13.848 15:15:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:13.848 ************************************ 00:11:13.848 START TEST nvmf_nmic 00:11:13.848 ************************************ 00:11:13.848 15:15:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:13.848 * Looking for test storage... 
00:11:13.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.848 15:15:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:13.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.848 --rc genhtml_branch_coverage=1 00:11:13.848 --rc genhtml_function_coverage=1 00:11:13.848 --rc genhtml_legend=1 00:11:13.848 --rc geninfo_all_blocks=1 00:11:13.848 --rc geninfo_unexecuted_blocks=1 
00:11:13.848 00:11:13.848 ' 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:13.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.848 --rc genhtml_branch_coverage=1 00:11:13.848 --rc genhtml_function_coverage=1 00:11:13.848 --rc genhtml_legend=1 00:11:13.848 --rc geninfo_all_blocks=1 00:11:13.848 --rc geninfo_unexecuted_blocks=1 00:11:13.848 00:11:13.848 ' 00:11:13.848 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:13.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.848 --rc genhtml_branch_coverage=1 00:11:13.848 --rc genhtml_function_coverage=1 00:11:13.848 --rc genhtml_legend=1 00:11:13.848 --rc geninfo_all_blocks=1 00:11:13.848 --rc geninfo_unexecuted_blocks=1 00:11:13.848 00:11:13.848 ' 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:13.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.849 --rc genhtml_branch_coverage=1 00:11:13.849 --rc genhtml_function_coverage=1 00:11:13.849 --rc genhtml_legend=1 00:11:13.849 --rc geninfo_all_blocks=1 00:11:13.849 --rc geninfo_unexecuted_blocks=1 00:11:13.849 00:11:13.849 ' 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.849 15:15:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:13.849 
15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:13.849 15:15:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:20.415 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.416 15:15:46 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:20.416 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:20.416 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:20.416 Found net devices under 0000:86:00.0: cvl_0_0 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:20.416 Found net devices under 0000:86:00.1: cvl_0_1 00:11:20.416 
15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:20.416 15:15:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:20.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:20.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:11:20.416 00:11:20.416 --- 10.0.0.2 ping statistics --- 00:11:20.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.416 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:20.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:20.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:11:20.416 00:11:20.416 --- 10.0.0.1 ping statistics --- 00:11:20.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.416 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.416 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3732940 00:11:20.417 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:11:20.417 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3732940 00:11:20.417 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3732940 ']' 00:11:20.417 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.417 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:20.417 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.417 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:20.417 15:15:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.417 [2024-11-06 15:15:47.223832] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:20.417 [2024-11-06 15:15:47.223918] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.417 [2024-11-06 15:15:47.355031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.417 [2024-11-06 15:15:47.463083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.417 [2024-11-06 15:15:47.463129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:20.417 [2024-11-06 15:15:47.463139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.417 [2024-11-06 15:15:47.463150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.417 [2024-11-06 15:15:47.463157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.417 [2024-11-06 15:15:47.465733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.417 [2024-11-06 15:15:47.465766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.417 [2024-11-06 15:15:47.465842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.417 [2024-11-06 15:15:47.465865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.417 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:20.417 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:11:20.417 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:20.417 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:20.417 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.675 [2024-11-06 15:15:48.070237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.675 
15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.675 Malloc0 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.675 [2024-11-06 15:15:48.196984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:20.675 test case1: single bdev can't be used in multiple subsystems 00:11:20.675 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.676 [2024-11-06 15:15:48.224845] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:20.676 [2024-11-06 
15:15:48.224878] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:20.676 [2024-11-06 15:15:48.224889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.676 request: 00:11:20.676 { 00:11:20.676 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:20.676 "namespace": { 00:11:20.676 "bdev_name": "Malloc0", 00:11:20.676 "no_auto_visible": false 00:11:20.676 }, 00:11:20.676 "method": "nvmf_subsystem_add_ns", 00:11:20.676 "req_id": 1 00:11:20.676 } 00:11:20.676 Got JSON-RPC error response 00:11:20.676 response: 00:11:20.676 { 00:11:20.676 "code": -32602, 00:11:20.676 "message": "Invalid parameters" 00:11:20.676 } 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:20.676 Adding namespace failed - expected result. 
00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:20.676 test case2: host connect to nvmf target in multiple paths 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.676 [2024-11-06 15:15:48.237011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.676 15:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.048 15:15:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:23.428 15:15:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:23.428 15:15:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:11:23.428 15:15:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.428 15:15:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:23.428 15:15:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 
00:11:25.326 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:25.326 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:25.326 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.326 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:25.326 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.326 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:11:25.326 15:15:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:25.326 [global] 00:11:25.326 thread=1 00:11:25.326 invalidate=1 00:11:25.326 rw=write 00:11:25.326 time_based=1 00:11:25.326 runtime=1 00:11:25.326 ioengine=libaio 00:11:25.326 direct=1 00:11:25.326 bs=4096 00:11:25.326 iodepth=1 00:11:25.326 norandommap=0 00:11:25.326 numjobs=1 00:11:25.326 00:11:25.326 verify_dump=1 00:11:25.326 verify_backlog=512 00:11:25.326 verify_state_save=0 00:11:25.326 do_verify=1 00:11:25.326 verify=crc32c-intel 00:11:25.326 [job0] 00:11:25.326 filename=/dev/nvme0n1 00:11:25.326 Could not set queue depth (nvme0n1) 00:11:25.582 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.582 fio-3.35 00:11:25.582 Starting 1 thread 00:11:26.515 00:11:26.515 job0: (groupid=0, jobs=1): err= 0: pid=3734026: Wed Nov 6 15:15:54 2024 00:11:26.515 read: IOPS=2075, BW=8304KiB/s (8503kB/s)(8312KiB/1001msec) 00:11:26.515 slat (nsec): min=6626, max=31328, avg=8489.32, stdev=1559.09 00:11:26.515 clat (usec): min=193, max=493, avg=242.21, stdev=29.86 00:11:26.515 lat (usec): min=202, max=506, avg=250.70, 
stdev=30.13 00:11:26.515 clat percentiles (usec): 00:11:26.515 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 00:11:26.515 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 233], 00:11:26.515 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 285], 00:11:26.515 | 99.00th=[ 293], 99.50th=[ 379], 99.90th=[ 486], 99.95th=[ 486], 00:11:26.515 | 99.99th=[ 494] 00:11:26.515 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:26.515 slat (nsec): min=9233, max=39548, avg=11601.61, stdev=2483.38 00:11:26.515 clat (usec): min=127, max=385, avg=170.77, stdev=31.99 00:11:26.515 lat (usec): min=139, max=417, avg=182.37, stdev=32.66 00:11:26.515 clat percentiles (usec): 00:11:26.515 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 153], 00:11:26.515 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:11:26.515 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 239], 95.00th=[ 243], 00:11:26.515 | 99.00th=[ 249], 99.50th=[ 289], 99.90th=[ 367], 99.95th=[ 367], 00:11:26.515 | 99.99th=[ 388] 00:11:26.515 bw ( KiB/s): min=10088, max=10088, per=98.61%, avg=10088.00, stdev= 0.00, samples=1 00:11:26.515 iops : min= 2522, max= 2522, avg=2522.00, stdev= 0.00, samples=1 00:11:26.515 lat (usec) : 250=83.03%, 500=16.97% 00:11:26.515 cpu : usr=2.80%, sys=5.50%, ctx=4638, majf=0, minf=1 00:11:26.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.515 issued rwts: total=2078,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.515 00:11:26.515 Run status group 0 (all jobs): 00:11:26.515 READ: bw=8304KiB/s (8503kB/s), 8304KiB/s-8304KiB/s (8503kB/s-8503kB/s), io=8312KiB (8511kB), run=1001-1001msec 00:11:26.515 WRITE: bw=9.99MiB/s (10.5MB/s), 
9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:11:26.515 00:11:26.515 Disk stats (read/write): 00:11:26.515 nvme0n1: ios=2085/2048, merge=0/0, ticks=504/335, in_queue=839, util=91.48% 00:11:26.515 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:27.081 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.081 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:11:27.081 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:27.081 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.081 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:27.081 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.081 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:11:27.081 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:27.081 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:27.081 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:27.082 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.339 15:15:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:27.339 rmmod nvme_tcp 00:11:27.339 rmmod nvme_fabrics 00:11:27.339 rmmod nvme_keyring 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3732940 ']' 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3732940 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3732940 ']' 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3732940 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3732940 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3732940' 00:11:27.339 killing process with pid 3732940 00:11:27.339 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3732940 00:11:27.340 15:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3732940 00:11:28.715 15:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:11:28.715 15:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:28.715 15:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:28.715 15:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:28.715 15:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:28.715 15:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:28.715 15:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:28.715 15:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.715 15:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.715 15:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.715 15:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.715 15:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.620 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:30.620 00:11:30.620 real 0m17.310s 00:11:30.620 user 0m41.126s 00:11:30.620 sys 0m5.556s 00:11:30.620 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.620 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:30.620 ************************************ 00:11:30.620 END TEST nvmf_nmic 00:11:30.620 ************************************ 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:30.879 15:15:58 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:30.879 ************************************ 00:11:30.879 START TEST nvmf_fio_target 00:11:30.879 ************************************ 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:30.879 * Looking for test storage... 00:11:30.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.879 
15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:30.879 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:30.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.880 --rc genhtml_branch_coverage=1 00:11:30.880 --rc genhtml_function_coverage=1 00:11:30.880 --rc genhtml_legend=1 00:11:30.880 --rc geninfo_all_blocks=1 00:11:30.880 --rc geninfo_unexecuted_blocks=1 00:11:30.880 00:11:30.880 ' 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:30.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.880 --rc genhtml_branch_coverage=1 00:11:30.880 --rc genhtml_function_coverage=1 00:11:30.880 --rc genhtml_legend=1 00:11:30.880 --rc geninfo_all_blocks=1 00:11:30.880 --rc geninfo_unexecuted_blocks=1 00:11:30.880 00:11:30.880 ' 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:30.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.880 --rc genhtml_branch_coverage=1 00:11:30.880 --rc genhtml_function_coverage=1 00:11:30.880 --rc genhtml_legend=1 00:11:30.880 --rc geninfo_all_blocks=1 00:11:30.880 --rc geninfo_unexecuted_blocks=1 00:11:30.880 00:11:30.880 ' 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:30.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.880 --rc genhtml_branch_coverage=1 00:11:30.880 --rc 
genhtml_function_coverage=1 00:11:30.880 --rc genhtml_legend=1 00:11:30.880 --rc geninfo_all_blocks=1 00:11:30.880 --rc geninfo_unexecuted_blocks=1 00:11:30.880 00:11:30.880 ' 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.880 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.139 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.140 15:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.705 15:16:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.705 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:37.706 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:37.706 15:16:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:37.706 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:37.706 Found net devices under 0000:86:00.0: cvl_0_0 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:37.706 Found net devices under 0000:86:00.1: cvl_0_1 
00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:37.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:11:37.706 00:11:37.706 --- 10.0.0.2 ping statistics --- 00:11:37.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.706 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:11:37.706 00:11:37.706 --- 10.0.0.1 ping statistics --- 00:11:37.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.706 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3738132 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3738132 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3738132 ']' 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:37.706 15:16:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.706 [2024-11-06 15:16:04.531243] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:37.706 [2024-11-06 15:16:04.531341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.706 [2024-11-06 15:16:04.662838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.706 [2024-11-06 15:16:04.766728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.706 [2024-11-06 15:16:04.766771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.706 [2024-11-06 15:16:04.766782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.706 [2024-11-06 15:16:04.766791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.706 [2024-11-06 15:16:04.766799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:37.706 [2024-11-06 15:16:04.769307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.706 [2024-11-06 15:16:04.769388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.706 [2024-11-06 15:16:04.769458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.707 [2024-11-06 15:16:04.769481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.707 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:37.707 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:11:37.707 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.707 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:37.707 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.964 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.964 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:37.964 [2024-11-06 15:16:05.539915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.964 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.530 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:38.530 15:16:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.530 15:16:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:38.530 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.788 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:38.789 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.047 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:39.047 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:39.304 15:16:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.562 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:39.562 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.819 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:39.819 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:40.076 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:40.076 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:40.334 15:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:40.590 15:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:40.590 15:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:40.848 15:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:40.848 15:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.848 15:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.104 [2024-11-06 15:16:08.631044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.104 15:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:41.361 15:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:41.619 15:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:42.559 15:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:42.559 15:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:11:42.559 15:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.559 15:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:11:42.559 15:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:11:42.559 15:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:11:45.085 15:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:45.085 15:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:45.085 15:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.085 15:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:11:45.085 15:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.085 15:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:11:45.085 15:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:45.085 [global] 00:11:45.085 thread=1 00:11:45.085 invalidate=1 00:11:45.085 rw=write 00:11:45.085 time_based=1 00:11:45.085 runtime=1 00:11:45.085 ioengine=libaio 00:11:45.085 direct=1 00:11:45.085 bs=4096 00:11:45.085 iodepth=1 00:11:45.085 norandommap=0 00:11:45.085 numjobs=1 00:11:45.085 00:11:45.085 
verify_dump=1 00:11:45.085 verify_backlog=512 00:11:45.085 verify_state_save=0 00:11:45.085 do_verify=1 00:11:45.085 verify=crc32c-intel 00:11:45.085 [job0] 00:11:45.085 filename=/dev/nvme0n1 00:11:45.085 [job1] 00:11:45.085 filename=/dev/nvme0n2 00:11:45.085 [job2] 00:11:45.085 filename=/dev/nvme0n3 00:11:45.085 [job3] 00:11:45.085 filename=/dev/nvme0n4 00:11:45.085 Could not set queue depth (nvme0n1) 00:11:45.085 Could not set queue depth (nvme0n2) 00:11:45.085 Could not set queue depth (nvme0n3) 00:11:45.085 Could not set queue depth (nvme0n4) 00:11:45.085 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.085 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.085 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.085 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.085 fio-3.35 00:11:45.085 Starting 4 threads 00:11:46.475 00:11:46.475 job0: (groupid=0, jobs=1): err= 0: pid=3739602: Wed Nov 6 15:16:13 2024 00:11:46.475 read: IOPS=2136, BW=8547KiB/s (8753kB/s)(8556KiB/1001msec) 00:11:46.475 slat (nsec): min=6305, max=26324, avg=7318.99, stdev=1061.35 00:11:46.475 clat (usec): min=192, max=503, avg=242.87, stdev=31.16 00:11:46.475 lat (usec): min=199, max=511, avg=250.19, stdev=31.25 00:11:46.475 clat percentiles (usec): 00:11:46.475 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:11:46.475 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:11:46.475 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 277], 00:11:46.475 | 99.00th=[ 412], 99.50th=[ 433], 99.90th=[ 478], 99.95th=[ 498], 00:11:46.475 | 99.99th=[ 502] 00:11:46.475 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:46.475 slat (nsec): min=9000, max=54011, avg=10173.37, stdev=1591.24 
00:11:46.475 clat (usec): min=124, max=367, avg=167.68, stdev=28.17 00:11:46.475 lat (usec): min=134, max=398, avg=177.86, stdev=28.33 00:11:46.475 clat percentiles (usec): 00:11:46.475 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:11:46.475 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:11:46.475 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 202], 95.00th=[ 229], 00:11:46.475 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 306], 99.95th=[ 326], 00:11:46.475 | 99.99th=[ 367] 00:11:46.475 bw ( KiB/s): min=10240, max=10240, per=45.42%, avg=10240.00, stdev= 0.00, samples=1 00:11:46.475 iops : min= 2560, max= 2560, avg=2560.00, stdev= 0.00, samples=1 00:11:46.475 lat (usec) : 250=85.19%, 500=14.79%, 750=0.02% 00:11:46.475 cpu : usr=1.90%, sys=4.70%, ctx=4699, majf=0, minf=1 00:11:46.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.475 issued rwts: total=2139,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.475 job1: (groupid=0, jobs=1): err= 0: pid=3739609: Wed Nov 6 15:16:13 2024 00:11:46.475 read: IOPS=118, BW=476KiB/s (487kB/s)(476KiB/1001msec) 00:11:46.475 slat (nsec): min=6729, max=22318, avg=8520.55, stdev=2831.39 00:11:46.475 clat (usec): min=199, max=42082, avg=7393.14, stdev=15502.37 00:11:46.475 lat (usec): min=206, max=42094, avg=7401.66, stdev=15504.61 00:11:46.475 clat percentiles (usec): 00:11:46.475 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 221], 20.00th=[ 229], 00:11:46.475 | 30.00th=[ 245], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 273], 00:11:46.475 | 70.00th=[ 289], 80.00th=[ 388], 90.00th=[41157], 95.00th=[41157], 00:11:46.475 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:46.475 | 99.99th=[42206] 00:11:46.475 
write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:46.475 slat (nsec): min=9263, max=72358, avg=11861.66, stdev=3472.37 00:11:46.475 clat (usec): min=160, max=369, avg=218.42, stdev=46.37 00:11:46.475 lat (usec): min=171, max=390, avg=230.28, stdev=47.68 00:11:46.475 clat percentiles (usec): 00:11:46.475 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:11:46.475 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 208], 00:11:46.475 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:11:46.475 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 371], 99.95th=[ 371], 00:11:46.475 | 99.99th=[ 371] 00:11:46.475 bw ( KiB/s): min= 4096, max= 4096, per=18.17%, avg=4096.00, stdev= 0.00, samples=1 00:11:46.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:46.475 lat (usec) : 250=58.64%, 500=37.88%, 750=0.16% 00:11:46.475 lat (msec) : 50=3.33% 00:11:46.475 cpu : usr=0.30%, sys=0.60%, ctx=631, majf=0, minf=1 00:11:46.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.475 issued rwts: total=119,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.475 job2: (groupid=0, jobs=1): err= 0: pid=3739631: Wed Nov 6 15:16:13 2024 00:11:46.475 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:46.475 slat (nsec): min=6738, max=28356, avg=7387.95, stdev=804.93 00:11:46.476 clat (usec): min=213, max=1131, avg=249.49, stdev=33.32 00:11:46.476 lat (usec): min=220, max=1141, avg=256.88, stdev=33.47 00:11:46.476 clat percentiles (usec): 00:11:46.476 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:11:46.476 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:11:46.476 | 70.00th=[ 255], 
80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 273], 00:11:46.476 | 99.00th=[ 285], 99.50th=[ 433], 99.90th=[ 668], 99.95th=[ 947], 00:11:46.476 | 99.99th=[ 1139] 00:11:46.476 write: IOPS=2263, BW=9055KiB/s (9272kB/s)(9064KiB/1001msec); 0 zone resets 00:11:46.476 slat (usec): min=9, max=7607, avg=15.10, stdev=159.59 00:11:46.476 clat (usec): min=130, max=393, avg=189.68, stdev=39.73 00:11:46.476 lat (usec): min=141, max=7911, avg=204.78, stdev=167.11 00:11:46.476 clat percentiles (usec): 00:11:46.476 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:11:46.476 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 182], 00:11:46.476 | 70.00th=[ 198], 80.00th=[ 219], 90.00th=[ 262], 95.00th=[ 277], 00:11:46.476 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 355], 99.95th=[ 383], 00:11:46.476 | 99.99th=[ 396] 00:11:46.476 bw ( KiB/s): min= 8192, max= 8192, per=36.34%, avg=8192.00, stdev= 0.00, samples=1 00:11:46.476 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:46.476 lat (usec) : 250=72.46%, 500=27.45%, 750=0.05%, 1000=0.02% 00:11:46.476 lat (msec) : 2=0.02% 00:11:46.476 cpu : usr=2.60%, sys=4.30%, ctx=4317, majf=0, minf=1 00:11:46.476 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.476 issued rwts: total=2048,2266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.476 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.476 job3: (groupid=0, jobs=1): err= 0: pid=3739637: Wed Nov 6 15:16:13 2024 00:11:46.476 read: IOPS=30, BW=123KiB/s (126kB/s)(128KiB/1038msec) 00:11:46.476 slat (nsec): min=8601, max=26120, avg=19058.84, stdev=6631.80 00:11:46.476 clat (usec): min=327, max=41491, avg=28240.05, stdev=19074.55 00:11:46.476 lat (usec): min=350, max=41500, avg=28259.11, stdev=19076.23 00:11:46.476 clat percentiles (usec): 
00:11:46.476 | 1.00th=[ 326], 5.00th=[ 343], 10.00th=[ 379], 20.00th=[ 404], 00:11:46.476 | 30.00th=[ 469], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:11:46.476 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:46.476 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:46.476 | 99.99th=[41681] 00:11:46.476 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:11:46.476 slat (usec): min=11, max=162, avg=15.92, stdev=10.78 00:11:46.476 clat (usec): min=165, max=376, avg=240.40, stdev=35.74 00:11:46.476 lat (usec): min=179, max=439, avg=256.32, stdev=35.85 00:11:46.476 clat percentiles (usec): 00:11:46.476 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:11:46.476 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 253], 00:11:46.476 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 297], 00:11:46.476 | 99.00th=[ 318], 99.50th=[ 355], 99.90th=[ 375], 99.95th=[ 375], 00:11:46.476 | 99.99th=[ 375] 00:11:46.476 bw ( KiB/s): min= 4096, max= 4096, per=18.17%, avg=4096.00, stdev= 0.00, samples=1 00:11:46.476 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:46.476 lat (usec) : 250=55.33%, 500=40.62% 00:11:46.476 lat (msec) : 50=4.04% 00:11:46.476 cpu : usr=0.29%, sys=1.16%, ctx=547, majf=0, minf=1 00:11:46.476 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.476 issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.476 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.476 00:11:46.476 Run status group 0 (all jobs): 00:11:46.476 READ: bw=16.3MiB/s (17.1MB/s), 123KiB/s-8547KiB/s (126kB/s-8753kB/s), io=16.9MiB (17.8MB), run=1001-1038msec 00:11:46.476 WRITE: bw=22.0MiB/s (23.1MB/s), 1973KiB/s-9.99MiB/s 
(2020kB/s-10.5MB/s), io=22.9MiB (24.0MB), run=1001-1038msec 00:11:46.476 00:11:46.476 Disk stats (read/write): 00:11:46.476 nvme0n1: ios=1895/2048, merge=0/0, ticks=441/346, in_queue=787, util=84.97% 00:11:46.476 nvme0n2: ios=32/512, merge=0/0, ticks=700/110, in_queue=810, util=86.15% 00:11:46.476 nvme0n3: ios=1638/2048, merge=0/0, ticks=1339/370, in_queue=1709, util=96.82% 00:11:46.476 nvme0n4: ios=84/512, merge=0/0, ticks=994/120, in_queue=1114, util=96.79% 00:11:46.476 15:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:46.476 [global] 00:11:46.476 thread=1 00:11:46.476 invalidate=1 00:11:46.476 rw=randwrite 00:11:46.476 time_based=1 00:11:46.476 runtime=1 00:11:46.476 ioengine=libaio 00:11:46.476 direct=1 00:11:46.476 bs=4096 00:11:46.476 iodepth=1 00:11:46.476 norandommap=0 00:11:46.476 numjobs=1 00:11:46.476 00:11:46.476 verify_dump=1 00:11:46.476 verify_backlog=512 00:11:46.476 verify_state_save=0 00:11:46.476 do_verify=1 00:11:46.476 verify=crc32c-intel 00:11:46.476 [job0] 00:11:46.476 filename=/dev/nvme0n1 00:11:46.476 [job1] 00:11:46.476 filename=/dev/nvme0n2 00:11:46.476 [job2] 00:11:46.476 filename=/dev/nvme0n3 00:11:46.476 [job3] 00:11:46.476 filename=/dev/nvme0n4 00:11:46.476 Could not set queue depth (nvme0n1) 00:11:46.476 Could not set queue depth (nvme0n2) 00:11:46.476 Could not set queue depth (nvme0n3) 00:11:46.476 Could not set queue depth (nvme0n4) 00:11:46.734 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.734 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.734 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.734 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:11:46.734 fio-3.35 00:11:46.734 Starting 4 threads 00:11:48.105 00:11:48.105 job0: (groupid=0, jobs=1): err= 0: pid=3740075: Wed Nov 6 15:16:15 2024 00:11:48.105 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:11:48.105 slat (nsec): min=10137, max=23639, avg=21836.73, stdev=2645.95 00:11:48.105 clat (usec): min=40552, max=41032, avg=40953.80, stdev=97.26 00:11:48.105 lat (usec): min=40562, max=41055, avg=40975.64, stdev=99.65 00:11:48.105 clat percentiles (usec): 00:11:48.105 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:48.105 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:48.105 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:48.105 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:48.105 | 99.99th=[41157] 00:11:48.105 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:11:48.105 slat (nsec): min=10916, max=37232, avg=12215.53, stdev=1997.81 00:11:48.105 clat (usec): min=150, max=241, avg=181.38, stdev=14.89 00:11:48.105 lat (usec): min=162, max=278, avg=193.60, stdev=15.27 00:11:48.105 clat percentiles (usec): 00:11:48.105 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:11:48.105 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:11:48.105 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 208], 00:11:48.105 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 241], 99.95th=[ 241], 00:11:48.105 | 99.99th=[ 241] 00:11:48.105 bw ( KiB/s): min= 4096, max= 4096, per=26.03%, avg=4096.00, stdev= 0.00, samples=1 00:11:48.105 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:48.105 lat (usec) : 250=95.88% 00:11:48.105 lat (msec) : 50=4.12% 00:11:48.105 cpu : usr=0.20%, sys=1.20%, ctx=538, majf=0, minf=1 00:11:48.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:48.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.105 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:48.105 job1: (groupid=0, jobs=1): err= 0: pid=3740086: Wed Nov 6 15:16:15 2024 00:11:48.105 read: IOPS=35, BW=143KiB/s (146kB/s)(144KiB/1007msec) 00:11:48.105 slat (nsec): min=8461, max=24846, avg=17490.53, stdev=6867.52 00:11:48.105 clat (usec): min=246, max=41877, avg=25198.45, stdev=20177.96 00:11:48.105 lat (usec): min=269, max=41901, avg=25215.94, stdev=20173.34 00:11:48.105 clat percentiles (usec): 00:11:48.105 | 1.00th=[ 247], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 265], 00:11:48.105 | 30.00th=[ 265], 40.00th=[40633], 50.00th=[40633], 60.00th=[40633], 00:11:48.105 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:11:48.105 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:48.105 | 99.99th=[41681] 00:11:48.105 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:11:48.105 slat (nsec): min=9609, max=36093, avg=10632.76, stdev=1516.80 00:11:48.105 clat (usec): min=149, max=309, avg=179.45, stdev=18.21 00:11:48.105 lat (usec): min=160, max=345, avg=190.09, stdev=18.63 00:11:48.105 clat percentiles (usec): 00:11:48.105 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:11:48.105 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:11:48.105 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 210], 00:11:48.105 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 310], 99.95th=[ 310], 00:11:48.105 | 99.99th=[ 310] 00:11:48.105 bw ( KiB/s): min= 4096, max= 4096, per=26.03%, avg=4096.00, stdev= 0.00, samples=1 00:11:48.105 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:48.105 lat (usec) : 250=93.07%, 500=2.92% 00:11:48.105 lat (msec) : 50=4.01% 00:11:48.105 cpu : 
usr=0.30%, sys=0.50%, ctx=549, majf=0, minf=1 00:11:48.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:48.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.105 issued rwts: total=36,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:48.105 job2: (groupid=0, jobs=1): err= 0: pid=3740108: Wed Nov 6 15:16:15 2024 00:11:48.105 read: IOPS=21, BW=86.5KiB/s (88.6kB/s)(88.0KiB/1017msec) 00:11:48.105 slat (nsec): min=9629, max=33431, avg=22580.59, stdev=3705.83 00:11:48.105 clat (usec): min=40865, max=41993, avg=41277.12, stdev=468.30 00:11:48.105 lat (usec): min=40888, max=42015, avg=41299.70, stdev=468.41 00:11:48.105 clat percentiles (usec): 00:11:48.105 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:48.105 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:48.105 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:48.106 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:48.106 | 99.99th=[42206] 00:11:48.106 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:11:48.106 slat (nsec): min=9275, max=40373, avg=10278.96, stdev=1517.55 00:11:48.106 clat (usec): min=156, max=436, avg=198.22, stdev=27.93 00:11:48.106 lat (usec): min=166, max=477, avg=208.50, stdev=28.49 00:11:48.106 clat percentiles (usec): 00:11:48.106 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182], 00:11:48.106 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 196], 00:11:48.106 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 225], 95.00th=[ 243], 00:11:48.106 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 437], 99.95th=[ 437], 00:11:48.106 | 99.99th=[ 437] 00:11:48.106 bw ( KiB/s): min= 4096, max= 4096, per=26.03%, avg=4096.00, stdev= 0.00, 
samples=1 00:11:48.106 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:48.106 lat (usec) : 250=91.76%, 500=4.12% 00:11:48.106 lat (msec) : 50=4.12% 00:11:48.106 cpu : usr=0.39%, sys=0.39%, ctx=534, majf=0, minf=2 00:11:48.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:48.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.106 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:48.106 job3: (groupid=0, jobs=1): err= 0: pid=3740116: Wed Nov 6 15:16:15 2024 00:11:48.106 read: IOPS=1969, BW=7877KiB/s (8066kB/s)(8200KiB/1041msec) 00:11:48.106 slat (nsec): min=7277, max=37719, avg=8642.27, stdev=1442.81 00:11:48.106 clat (usec): min=187, max=41988, avg=276.86, stdev=1299.36 00:11:48.106 lat (usec): min=195, max=42009, avg=285.50, stdev=1299.60 00:11:48.106 clat percentiles (usec): 00:11:48.106 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:11:48.106 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 241], 00:11:48.106 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 269], 00:11:48.106 | 99.00th=[ 289], 99.50th=[ 400], 99.90th=[ 603], 99.95th=[41681], 00:11:48.106 | 99.99th=[42206] 00:11:48.106 write: IOPS=2459, BW=9837KiB/s (10.1MB/s)(10.0MiB/1041msec); 0 zone resets 00:11:48.106 slat (nsec): min=10221, max=41311, avg=11788.41, stdev=1767.10 00:11:48.106 clat (usec): min=125, max=320, avg=160.68, stdev=21.11 00:11:48.106 lat (usec): min=137, max=359, avg=172.47, stdev=21.55 00:11:48.106 clat percentiles (usec): 00:11:48.106 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 143], 00:11:48.106 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 161], 00:11:48.106 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 202], 00:11:48.106 | 99.00th=[ 221], 
99.50th=[ 231], 99.90th=[ 289], 99.95th=[ 310], 00:11:48.106 | 99.99th=[ 322] 00:11:48.106 bw ( KiB/s): min=10160, max=10320, per=65.06%, avg=10240.00, stdev=113.14, samples=2 00:11:48.106 iops : min= 2540, max= 2580, avg=2560.00, stdev=28.28, samples=2 00:11:48.106 lat (usec) : 250=90.07%, 500=9.80%, 750=0.09% 00:11:48.106 lat (msec) : 50=0.04% 00:11:48.106 cpu : usr=3.17%, sys=6.15%, ctx=4610, majf=0, minf=1 00:11:48.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:48.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.106 issued rwts: total=2050,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:48.106 00:11:48.106 Run status group 0 (all jobs): 00:11:48.106 READ: bw=8184KiB/s (8381kB/s), 86.5KiB/s-7877KiB/s (88.6kB/s-8066kB/s), io=8520KiB (8724kB), run=1003-1041msec 00:11:48.106 WRITE: bw=15.4MiB/s (16.1MB/s), 2014KiB/s-9837KiB/s (2062kB/s-10.1MB/s), io=16.0MiB (16.8MB), run=1003-1041msec 00:11:48.106 00:11:48.106 Disk stats (read/write): 00:11:48.106 nvme0n1: ios=42/512, merge=0/0, ticks=1722/90, in_queue=1812, util=99.00% 00:11:48.106 nvme0n2: ios=60/512, merge=0/0, ticks=1730/87, in_queue=1817, util=96.64% 00:11:48.106 nvme0n3: ios=73/512, merge=0/0, ticks=726/101, in_queue=827, util=89.94% 00:11:48.106 nvme0n4: ios=1984/2048, merge=0/0, ticks=646/320, in_queue=966, util=94.81% 00:11:48.106 15:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:48.106 [global] 00:11:48.106 thread=1 00:11:48.106 invalidate=1 00:11:48.106 rw=write 00:11:48.106 time_based=1 00:11:48.106 runtime=1 00:11:48.106 ioengine=libaio 00:11:48.106 direct=1 00:11:48.106 bs=4096 00:11:48.106 iodepth=128 00:11:48.106 norandommap=0 
00:11:48.106 numjobs=1 00:11:48.106 00:11:48.106 verify_dump=1 00:11:48.106 verify_backlog=512 00:11:48.106 verify_state_save=0 00:11:48.106 do_verify=1 00:11:48.106 verify=crc32c-intel 00:11:48.106 [job0] 00:11:48.106 filename=/dev/nvme0n1 00:11:48.106 [job1] 00:11:48.106 filename=/dev/nvme0n2 00:11:48.106 [job2] 00:11:48.106 filename=/dev/nvme0n3 00:11:48.106 [job3] 00:11:48.106 filename=/dev/nvme0n4 00:11:48.106 Could not set queue depth (nvme0n1) 00:11:48.106 Could not set queue depth (nvme0n2) 00:11:48.106 Could not set queue depth (nvme0n3) 00:11:48.106 Could not set queue depth (nvme0n4) 00:11:48.106 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:48.106 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:48.106 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:48.106 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:48.106 fio-3.35 00:11:48.106 Starting 4 threads 00:11:49.480 00:11:49.480 job0: (groupid=0, jobs=1): err= 0: pid=3740559: Wed Nov 6 15:16:16 2024 00:11:49.480 read: IOPS=4206, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1010msec) 00:11:49.480 slat (nsec): min=1028, max=37362k, avg=124956.15, stdev=1168003.51 00:11:49.480 clat (usec): min=2229, max=73355, avg=14616.54, stdev=11704.10 00:11:49.480 lat (usec): min=2233, max=73357, avg=14741.49, stdev=11788.88 00:11:49.480 clat percentiles (usec): 00:11:49.480 | 1.00th=[ 3884], 5.00th=[ 6194], 10.00th=[ 7504], 20.00th=[ 8586], 00:11:49.480 | 30.00th=[10159], 40.00th=[11207], 50.00th=[11469], 60.00th=[11994], 00:11:49.480 | 70.00th=[12649], 80.00th=[14615], 90.00th=[21890], 95.00th=[47449], 00:11:49.480 | 99.00th=[58459], 99.50th=[67634], 99.90th=[72877], 99.95th=[72877], 00:11:49.480 | 99.99th=[72877] 00:11:49.480 write: IOPS=4562, BW=17.8MiB/s 
(18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:11:49.480 slat (nsec): min=1839, max=21074k, avg=92012.38, stdev=672017.67 00:11:49.480 clat (usec): min=789, max=61706, avg=13801.89, stdev=9762.46 00:11:49.480 lat (usec): min=813, max=61714, avg=13893.90, stdev=9810.42 00:11:49.480 clat percentiles (usec): 00:11:49.480 | 1.00th=[ 2180], 5.00th=[ 4621], 10.00th=[ 6390], 20.00th=[ 8717], 00:11:49.480 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[10945], 60.00th=[11731], 00:11:49.480 | 70.00th=[12649], 80.00th=[16319], 90.00th=[23987], 95.00th=[31589], 00:11:49.480 | 99.00th=[58459], 99.50th=[59507], 99.90th=[61604], 99.95th=[61604], 00:11:49.480 | 99.99th=[61604] 00:11:49.480 bw ( KiB/s): min=16384, max=20439, per=27.29%, avg=18411.50, stdev=2867.32, samples=2 00:11:49.480 iops : min= 4096, max= 5109, avg=4602.50, stdev=716.30, samples=2 00:11:49.480 lat (usec) : 1000=0.02% 00:11:49.480 lat (msec) : 2=0.36%, 4=2.62%, 10=29.76%, 20=53.16%, 50=10.61% 00:11:49.480 lat (msec) : 100=3.47% 00:11:49.480 cpu : usr=3.17%, sys=3.57%, ctx=382, majf=0, minf=1 00:11:49.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:49.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.480 issued rwts: total=4249,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.480 job1: (groupid=0, jobs=1): err= 0: pid=3740560: Wed Nov 6 15:16:16 2024 00:11:49.480 read: IOPS=3164, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1011msec) 00:11:49.480 slat (nsec): min=1203, max=32207k, avg=163941.26, stdev=1393368.58 00:11:49.480 clat (usec): min=2020, max=83629, avg=20357.11, stdev=13140.42 00:11:49.480 lat (usec): min=2029, max=83652, avg=20521.05, stdev=13246.74 00:11:49.480 clat percentiles (usec): 00:11:49.480 | 1.00th=[ 4883], 5.00th=[11338], 10.00th=[11600], 20.00th=[11731], 00:11:49.480 | 
30.00th=[11994], 40.00th=[12387], 50.00th=[13829], 60.00th=[17433], 00:11:49.480 | 70.00th=[24773], 80.00th=[27395], 90.00th=[35914], 95.00th=[50594], 00:11:49.480 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[82314], 00:11:49.480 | 99.99th=[83362] 00:11:49.480 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:11:49.480 slat (nsec): min=1949, max=24265k, avg=127161.09, stdev=929529.46 00:11:49.480 clat (usec): min=2422, max=64153, avg=17503.61, stdev=11203.75 00:11:49.480 lat (usec): min=2431, max=64160, avg=17630.77, stdev=11283.35 00:11:49.480 clat percentiles (usec): 00:11:49.480 | 1.00th=[ 3687], 5.00th=[ 8586], 10.00th=[10552], 20.00th=[11076], 00:11:49.481 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[13435], 00:11:49.481 | 70.00th=[20317], 80.00th=[22414], 90.00th=[30540], 95.00th=[46400], 00:11:49.481 | 99.00th=[57934], 99.50th=[58459], 99.90th=[64226], 99.95th=[64226], 00:11:49.481 | 99.99th=[64226] 00:11:49.481 bw ( KiB/s): min=12648, max=16016, per=21.24%, avg=14332.00, stdev=2381.54, samples=2 00:11:49.481 iops : min= 3162, max= 4004, avg=3583.00, stdev=595.38, samples=2 00:11:49.481 lat (msec) : 4=0.97%, 10=5.31%, 20=60.77%, 50=28.54%, 100=4.41% 00:11:49.481 cpu : usr=1.68%, sys=4.16%, ctx=258, majf=0, minf=1 00:11:49.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:49.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.481 issued rwts: total=3199,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.481 job2: (groupid=0, jobs=1): err= 0: pid=3740561: Wed Nov 6 15:16:16 2024 00:11:49.481 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:11:49.481 slat (nsec): min=1086, max=31299k, avg=114608.74, stdev=879985.39 00:11:49.481 clat (usec): min=2052, max=59891, 
avg=15802.19, stdev=8091.48 00:11:49.481 lat (usec): min=2118, max=59897, avg=15916.80, stdev=8117.64 00:11:49.481 clat percentiles (usec): 00:11:49.481 | 1.00th=[ 2245], 5.00th=[ 6259], 10.00th=[ 9503], 20.00th=[11469], 00:11:49.481 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13566], 60.00th=[14746], 00:11:49.481 | 70.00th=[15533], 80.00th=[19006], 90.00th=[24249], 95.00th=[32637], 00:11:49.481 | 99.00th=[55313], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:11:49.481 | 99.99th=[60031] 00:11:49.481 write: IOPS=4718, BW=18.4MiB/s (19.3MB/s)(18.6MiB/1010msec); 0 zone resets 00:11:49.481 slat (nsec): min=1855, max=11355k, avg=75952.26, stdev=541677.53 00:11:49.481 clat (usec): min=505, max=23190, avg=11625.31, stdev=2992.65 00:11:49.481 lat (usec): min=620, max=23195, avg=11701.26, stdev=3020.08 00:11:49.481 clat percentiles (usec): 00:11:49.481 | 1.00th=[ 2999], 5.00th=[ 5342], 10.00th=[ 7832], 20.00th=[ 9765], 00:11:49.481 | 30.00th=[10683], 40.00th=[11600], 50.00th=[11863], 60.00th=[12387], 00:11:49.481 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14484], 95.00th=[16450], 00:11:49.481 | 99.00th=[18482], 99.50th=[19530], 99.90th=[22414], 99.95th=[22676], 00:11:49.481 | 99.99th=[23200] 00:11:49.481 bw ( KiB/s): min=17536, max=19576, per=27.50%, avg=18556.00, stdev=1442.50, samples=2 00:11:49.481 iops : min= 4384, max= 4894, avg=4639.00, stdev=360.62, samples=2 00:11:49.481 lat (usec) : 750=0.01% 00:11:49.481 lat (msec) : 2=0.19%, 4=2.89%, 10=13.53%, 20=74.54%, 50=8.18% 00:11:49.481 lat (msec) : 100=0.66% 00:11:49.481 cpu : usr=2.58%, sys=4.46%, ctx=431, majf=0, minf=1 00:11:49.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:49.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.481 issued rwts: total=4608,4766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.481 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:11:49.481 job3: (groupid=0, jobs=1): err= 0: pid=3740562: Wed Nov 6 15:16:16 2024 00:11:49.481 read: IOPS=3785, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1010msec) 00:11:49.481 slat (nsec): min=1518, max=19290k, avg=137493.67, stdev=1023330.42 00:11:49.481 clat (usec): min=3314, max=75538, avg=17112.49, stdev=10747.11 00:11:49.481 lat (usec): min=4833, max=75549, avg=17249.99, stdev=10827.59 00:11:49.481 clat percentiles (usec): 00:11:49.481 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[11207], 20.00th=[11600], 00:11:49.481 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13173], 60.00th=[14222], 00:11:49.481 | 70.00th=[16188], 80.00th=[21365], 90.00th=[27132], 95.00th=[30802], 00:11:49.481 | 99.00th=[71828], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:11:49.481 | 99.99th=[76022] 00:11:49.481 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:11:49.481 slat (usec): min=2, max=17045, avg=105.12, stdev=752.36 00:11:49.481 clat (usec): min=351, max=75502, avg=15295.15, stdev=9073.86 00:11:49.481 lat (usec): min=902, max=75507, avg=15400.26, stdev=9137.07 00:11:49.481 clat percentiles (usec): 00:11:49.481 | 1.00th=[ 1713], 5.00th=[ 5145], 10.00th=[ 8455], 20.00th=[10945], 00:11:49.481 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13435], 60.00th=[14222], 00:11:49.481 | 70.00th=[14615], 80.00th=[17957], 90.00th=[22414], 95.00th=[31065], 00:11:49.481 | 99.00th=[56886], 99.50th=[61080], 99.90th=[63701], 99.95th=[63701], 00:11:49.481 | 99.99th=[76022] 00:11:49.481 bw ( KiB/s): min=16351, max=16384, per=24.26%, avg=16367.50, stdev=23.33, samples=2 00:11:49.481 iops : min= 4087, max= 4096, avg=4091.50, stdev= 6.36, samples=2 00:11:49.481 lat (usec) : 500=0.01%, 1000=0.19% 00:11:49.481 lat (msec) : 2=0.32%, 4=0.96%, 10=8.30%, 20=72.16%, 50=15.47% 00:11:49.481 lat (msec) : 100=2.60% 00:11:49.481 cpu : usr=2.97%, sys=5.95%, ctx=368, majf=0, minf=1 00:11:49.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:49.481 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.481 issued rwts: total=3823,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.481 00:11:49.481 Run status group 0 (all jobs): 00:11:49.481 READ: bw=61.4MiB/s (64.3MB/s), 12.4MiB/s-17.8MiB/s (13.0MB/s-18.7MB/s), io=62.0MiB (65.0MB), run=1010-1011msec 00:11:49.481 WRITE: bw=65.9MiB/s (69.1MB/s), 13.8MiB/s-18.4MiB/s (14.5MB/s-19.3MB/s), io=66.6MiB (69.9MB), run=1010-1011msec 00:11:49.481 00:11:49.481 Disk stats (read/write): 00:11:49.481 nvme0n1: ios=3909/4096, merge=0/0, ticks=39109/44082, in_queue=83191, util=87.06% 00:11:49.481 nvme0n2: ios=2610/2959, merge=0/0, ticks=39586/38515, in_queue=78101, util=91.57% 00:11:49.481 nvme0n3: ios=3891/4096, merge=0/0, ticks=38484/30527, in_queue=69011, util=97.92% 00:11:49.481 nvme0n4: ios=3108/3520, merge=0/0, ticks=52621/52542, in_queue=105163, util=98.95% 00:11:49.481 15:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:49.481 [global] 00:11:49.481 thread=1 00:11:49.481 invalidate=1 00:11:49.481 rw=randwrite 00:11:49.481 time_based=1 00:11:49.481 runtime=1 00:11:49.481 ioengine=libaio 00:11:49.481 direct=1 00:11:49.481 bs=4096 00:11:49.481 iodepth=128 00:11:49.481 norandommap=0 00:11:49.481 numjobs=1 00:11:49.481 00:11:49.481 verify_dump=1 00:11:49.481 verify_backlog=512 00:11:49.481 verify_state_save=0 00:11:49.481 do_verify=1 00:11:49.481 verify=crc32c-intel 00:11:49.481 [job0] 00:11:49.481 filename=/dev/nvme0n1 00:11:49.481 [job1] 00:11:49.481 filename=/dev/nvme0n2 00:11:49.481 [job2] 00:11:49.481 filename=/dev/nvme0n3 00:11:49.481 [job3] 00:11:49.481 filename=/dev/nvme0n4 00:11:49.481 Could not set queue depth (nvme0n1) 00:11:49.481 Could not set 
queue depth (nvme0n2) 00:11:49.481 Could not set queue depth (nvme0n3) 00:11:49.481 Could not set queue depth (nvme0n4) 00:11:49.739 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.739 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.739 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.739 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.739 fio-3.35 00:11:49.739 Starting 4 threads 00:11:51.113 00:11:51.113 job0: (groupid=0, jobs=1): err= 0: pid=3740928: Wed Nov 6 15:16:18 2024 00:11:51.113 read: IOPS=3688, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1006msec) 00:11:51.113 slat (nsec): min=1229, max=16762k, avg=115384.01, stdev=873648.76 00:11:51.113 clat (usec): min=2873, max=49449, avg=14924.56, stdev=6432.58 00:11:51.113 lat (usec): min=4386, max=60142, avg=15039.94, stdev=6504.07 00:11:51.113 clat percentiles (usec): 00:11:51.113 | 1.00th=[ 7046], 5.00th=[ 8094], 10.00th=[10290], 20.00th=[10552], 00:11:51.113 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12911], 60.00th=[15008], 00:11:51.113 | 70.00th=[15926], 80.00th=[17433], 90.00th=[23200], 95.00th=[30278], 00:11:51.113 | 99.00th=[39060], 99.50th=[39060], 99.90th=[48497], 99.95th=[48497], 00:11:51.113 | 99.99th=[49546] 00:11:51.113 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:11:51.113 slat (nsec): min=1925, max=17248k, avg=129498.49, stdev=841494.36 00:11:51.113 clat (usec): min=1963, max=102876, avg=17230.17, stdev=14773.91 00:11:51.113 lat (usec): min=1992, max=102888, avg=17359.67, stdev=14870.38 00:11:51.113 clat percentiles (msec): 00:11:51.113 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:11:51.113 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 14], 00:11:51.113 | 70.00th=[ 18], 
80.00th=[ 22], 90.00th=[ 28], 95.00th=[ 36], 00:11:51.113 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 104], 99.95th=[ 104], 00:11:51.113 | 99.99th=[ 104] 00:11:51.113 bw ( KiB/s): min=12288, max=20480, per=23.83%, avg=16384.00, stdev=5792.62, samples=2 00:11:51.113 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:11:51.113 lat (msec) : 2=0.03%, 4=0.36%, 10=12.53%, 20=69.21%, 50=15.95% 00:11:51.113 lat (msec) : 100=1.84%, 250=0.09% 00:11:51.113 cpu : usr=2.99%, sys=4.98%, ctx=371, majf=0, minf=1 00:11:51.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:51.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:51.113 issued rwts: total=3711,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:51.113 job1: (groupid=0, jobs=1): err= 0: pid=3740929: Wed Nov 6 15:16:18 2024 00:11:51.113 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:11:51.113 slat (nsec): min=1275, max=13441k, avg=120774.08, stdev=850278.44 00:11:51.113 clat (usec): min=3990, max=69747, avg=13843.85, stdev=8464.11 00:11:51.113 lat (usec): min=3996, max=69755, avg=13964.63, stdev=8554.00 00:11:51.113 clat percentiles (usec): 00:11:51.113 | 1.00th=[ 5080], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10290], 00:11:51.113 | 30.00th=[10552], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125], 00:11:51.113 | 70.00th=[13566], 80.00th=[14353], 90.00th=[17695], 95.00th=[23725], 00:11:51.113 | 99.00th=[58459], 99.50th=[60031], 99.90th=[69731], 99.95th=[69731], 00:11:51.113 | 99.99th=[69731] 00:11:51.113 write: IOPS=4491, BW=17.5MiB/s (18.4MB/s)(17.8MiB/1013msec); 0 zone resets 00:11:51.113 slat (usec): min=2, max=10877, avg=105.59, stdev=589.73 00:11:51.113 clat (usec): min=1477, max=69737, avg=15768.37, stdev=10584.34 00:11:51.113 lat (usec): min=1488, max=69745, 
avg=15873.96, stdev=10645.91 00:11:51.113 clat percentiles (usec): 00:11:51.113 | 1.00th=[ 3654], 5.00th=[ 6587], 10.00th=[ 8717], 20.00th=[ 9765], 00:11:51.113 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600], 00:11:51.113 | 70.00th=[15270], 80.00th=[22938], 90.00th=[30802], 95.00th=[35390], 00:11:51.113 | 99.00th=[57934], 99.50th=[60031], 99.90th=[60556], 99.95th=[61604], 00:11:51.113 | 99.99th=[69731] 00:11:51.113 bw ( KiB/s): min=15552, max=19824, per=25.73%, avg=17688.00, stdev=3020.76, samples=2 00:11:51.113 iops : min= 3888, max= 4956, avg=4422.00, stdev=755.19, samples=2 00:11:51.113 lat (msec) : 2=0.02%, 4=0.83%, 10=17.96%, 20=63.83%, 50=14.78% 00:11:51.113 lat (msec) : 100=2.57% 00:11:51.113 cpu : usr=3.95%, sys=4.84%, ctx=440, majf=0, minf=1 00:11:51.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:51.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:51.113 issued rwts: total=4096,4550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:51.113 job2: (groupid=0, jobs=1): err= 0: pid=3740932: Wed Nov 6 15:16:18 2024 00:11:51.113 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:11:51.113 slat (nsec): min=1073, max=15391k, avg=143899.57, stdev=1002449.02 00:11:51.113 clat (usec): min=4356, max=46680, avg=17569.39, stdev=7655.87 00:11:51.113 lat (usec): min=4361, max=46705, avg=17713.29, stdev=7727.23 00:11:51.113 clat percentiles (usec): 00:11:51.113 | 1.00th=[ 6587], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[12125], 00:11:51.113 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13960], 60.00th=[16450], 00:11:51.113 | 70.00th=[20055], 80.00th=[25297], 90.00th=[30278], 95.00th=[32637], 00:11:51.113 | 99.00th=[39060], 99.50th=[39584], 99.90th=[40109], 99.95th=[42206], 00:11:51.113 | 99.99th=[46924] 00:11:51.113 write: 
IOPS=3804, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1011msec); 0 zone resets 00:11:51.113 slat (nsec): min=1884, max=26102k, avg=117151.69, stdev=814681.83 00:11:51.113 clat (usec): min=2910, max=63807, avg=16978.03, stdev=8753.14 00:11:51.113 lat (usec): min=2919, max=63835, avg=17095.18, stdev=8822.54 00:11:51.113 clat percentiles (usec): 00:11:51.113 | 1.00th=[ 4490], 5.00th=[ 7701], 10.00th=[10290], 20.00th=[11731], 00:11:51.113 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[16319], 00:11:51.113 | 70.00th=[17695], 80.00th=[23200], 90.00th=[31327], 95.00th=[37487], 00:11:51.113 | 99.00th=[44303], 99.50th=[44303], 99.90th=[47973], 99.95th=[52691], 00:11:51.113 | 99.99th=[63701] 00:11:51.113 bw ( KiB/s): min=12008, max=17744, per=21.64%, avg=14876.00, stdev=4055.96, samples=2 00:11:51.113 iops : min= 3002, max= 4436, avg=3719.00, stdev=1013.99, samples=2 00:11:51.113 lat (msec) : 4=0.27%, 10=7.73%, 20=64.59%, 50=27.38%, 100=0.04% 00:11:51.113 cpu : usr=2.57%, sys=3.66%, ctx=385, majf=0, minf=1 00:11:51.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:51.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:51.113 issued rwts: total=3584,3846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:51.113 job3: (groupid=0, jobs=1): err= 0: pid=3740933: Wed Nov 6 15:16:18 2024 00:11:51.113 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:11:51.113 slat (nsec): min=1090, max=9040.7k, avg=99716.96, stdev=588442.11 00:11:51.113 clat (usec): min=5200, max=25529, avg=12841.47, stdev=2400.88 00:11:51.113 lat (usec): min=5209, max=25555, avg=12941.19, stdev=2443.65 00:11:51.113 clat percentiles (usec): 00:11:51.113 | 1.00th=[ 7701], 5.00th=[ 9241], 10.00th=[10552], 20.00th=[11600], 00:11:51.113 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 
60.00th=[12649], 00:11:51.113 | 70.00th=[12911], 80.00th=[14091], 90.00th=[15926], 95.00th=[17695], 00:11:51.113 | 99.00th=[21103], 99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:11:51.113 | 99.99th=[25560] 00:11:51.113 write: IOPS=4908, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1002msec); 0 zone resets 00:11:51.113 slat (nsec): min=1918, max=13171k, avg=104534.58, stdev=609114.65 00:11:51.113 clat (usec): min=486, max=32891, avg=13687.34, stdev=4073.03 00:11:51.113 lat (usec): min=695, max=32895, avg=13791.88, stdev=4116.10 00:11:51.114 clat percentiles (usec): 00:11:51.114 | 1.00th=[ 5800], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[11600], 00:11:51.114 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:11:51.114 | 70.00th=[14353], 80.00th=[16909], 90.00th=[19006], 95.00th=[20055], 00:11:51.114 | 99.00th=[29754], 99.50th=[30802], 99.90th=[32900], 99.95th=[32900], 00:11:51.114 | 99.99th=[32900] 00:11:51.114 bw ( KiB/s): min=20480, max=20480, per=29.79%, avg=20480.00, stdev= 0.00, samples=1 00:11:51.114 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:51.114 lat (usec) : 500=0.01%, 750=0.01% 00:11:51.114 lat (msec) : 10=9.08%, 20=87.51%, 50=3.39% 00:11:51.114 cpu : usr=4.80%, sys=3.90%, ctx=502, majf=0, minf=1 00:11:51.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:51.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:51.114 issued rwts: total=4608,4918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:51.114 00:11:51.114 Run status group 0 (all jobs): 00:11:51.114 READ: bw=61.7MiB/s (64.7MB/s), 13.8MiB/s-18.0MiB/s (14.5MB/s-18.8MB/s), io=62.5MiB (65.5MB), run=1002-1013msec 00:11:51.114 WRITE: bw=67.1MiB/s (70.4MB/s), 14.9MiB/s-19.2MiB/s (15.6MB/s-20.1MB/s), io=68.0MiB (71.3MB), run=1002-1013msec 00:11:51.114 
00:11:51.114 Disk stats (read/write): 00:11:51.114 nvme0n1: ios=3605/3647, merge=0/0, ticks=40575/40094, in_queue=80669, util=98.90% 00:11:51.114 nvme0n2: ios=3589/3655, merge=0/0, ticks=48643/56301, in_queue=104944, util=87.31% 00:11:51.114 nvme0n3: ios=2685/3072, merge=0/0, ticks=36187/41035, in_queue=77222, util=89.07% 00:11:51.114 nvme0n4: ios=3823/4096, merge=0/0, ticks=23498/25024, in_queue=48522, util=88.05% 00:11:51.114 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:51.114 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3741165 00:11:51.114 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:51.114 15:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:51.114 [global] 00:11:51.114 thread=1 00:11:51.114 invalidate=1 00:11:51.114 rw=read 00:11:51.114 time_based=1 00:11:51.114 runtime=10 00:11:51.114 ioengine=libaio 00:11:51.114 direct=1 00:11:51.114 bs=4096 00:11:51.114 iodepth=1 00:11:51.114 norandommap=1 00:11:51.114 numjobs=1 00:11:51.114 00:11:51.114 [job0] 00:11:51.114 filename=/dev/nvme0n1 00:11:51.114 [job1] 00:11:51.114 filename=/dev/nvme0n2 00:11:51.114 [job2] 00:11:51.114 filename=/dev/nvme0n3 00:11:51.114 [job3] 00:11:51.114 filename=/dev/nvme0n4 00:11:51.114 Could not set queue depth (nvme0n1) 00:11:51.114 Could not set queue depth (nvme0n2) 00:11:51.114 Could not set queue depth (nvme0n3) 00:11:51.114 Could not set queue depth (nvme0n4) 00:11:51.371 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.371 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.371 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.371 job3: (g=0): 
rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.371 fio-3.35 00:11:51.371 Starting 4 threads 00:11:54.652 15:16:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:54.652 15:16:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:54.652 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=16048128, buflen=4096 00:11:54.652 fio: pid=3741311, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:54.652 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=41160704, buflen=4096 00:11:54.652 fio: pid=3741310, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:54.652 15:16:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.652 15:16:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:54.652 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=12939264, buflen=4096 00:11:54.652 fio: pid=3741308, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:54.652 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.652 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:54.911 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52854784, buflen=4096 00:11:54.911 fio: pid=3741309, err=95/file:io_u.c:1889, 
func=io_u error, error=Operation not supported 00:11:54.911 00:11:54.911 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3741308: Wed Nov 6 15:16:22 2024 00:11:54.911 read: IOPS=1020, BW=4080KiB/s (4178kB/s)(12.3MiB/3097msec) 00:11:54.911 slat (usec): min=6, max=7635, avg= 9.96, stdev=135.72 00:11:54.911 clat (usec): min=205, max=42063, avg=962.46, stdev=5109.28 00:11:54.911 lat (usec): min=213, max=48806, avg=972.42, stdev=5131.66 00:11:54.911 clat percentiles (usec): 00:11:54.911 | 1.00th=[ 237], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:11:54.911 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:11:54.911 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 355], 95.00th=[ 494], 00:11:54.911 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:54.911 | 99.99th=[42206] 00:11:54.911 bw ( KiB/s): min= 96, max=13016, per=11.70%, avg=4208.50, stdev=5421.77, samples=6 00:11:54.911 iops : min= 24, max= 3254, avg=1052.00, stdev=1355.56, samples=6 00:11:54.911 lat (usec) : 250=1.23%, 500=94.40%, 750=2.66%, 1000=0.03% 00:11:54.911 lat (msec) : 2=0.06%, 50=1.58% 00:11:54.911 cpu : usr=0.32%, sys=0.90%, ctx=3161, majf=0, minf=1 00:11:54.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.911 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.911 issued rwts: total=3160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.911 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3741309: Wed Nov 6 15:16:22 2024 00:11:54.911 read: IOPS=3864, BW=15.1MiB/s (15.8MB/s)(50.4MiB/3339msec) 00:11:54.911 slat (usec): min=6, max=14662, avg=11.38, stdev=198.00 00:11:54.911 clat (usec): min=165, max=66452, avg=244.07, 
stdev=879.86 00:11:54.911 lat (usec): min=173, max=66459, avg=255.45, stdev=902.53 00:11:54.911 clat percentiles (usec): 00:11:54.911 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 204], 00:11:54.911 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:11:54.911 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 273], 00:11:54.911 | 99.00th=[ 330], 99.50th=[ 424], 99.90th=[ 603], 99.95th=[ 4293], 00:11:54.911 | 99.99th=[41681] 00:11:54.911 bw ( KiB/s): min=10760, max=17568, per=43.77%, avg=15746.83, stdev=2509.61, samples=6 00:11:54.911 iops : min= 2690, max= 4392, avg=3936.67, stdev=627.39, samples=6 00:11:54.911 lat (usec) : 250=85.01%, 500=14.84%, 750=0.06%, 1000=0.02% 00:11:54.911 lat (msec) : 4=0.01%, 10=0.02%, 50=0.03%, 100=0.01% 00:11:54.911 cpu : usr=1.44%, sys=3.83%, ctx=12911, majf=0, minf=2 00:11:54.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.911 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.911 issued rwts: total=12905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.911 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3741310: Wed Nov 6 15:16:22 2024 00:11:54.911 read: IOPS=3498, BW=13.7MiB/s (14.3MB/s)(39.3MiB/2873msec) 00:11:54.911 slat (usec): min=6, max=11173, avg= 9.61, stdev=135.07 00:11:54.911 clat (usec): min=175, max=41542, avg=272.83, stdev=818.75 00:11:54.911 lat (usec): min=183, max=41550, avg=282.45, stdev=830.00 00:11:54.911 clat percentiles (usec): 00:11:54.911 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:11:54.911 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 249], 00:11:54.911 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 330], 00:11:54.911 | 99.00th=[ 482], 99.50th=[ 
506], 99.90th=[ 676], 99.95th=[ 971], 00:11:54.911 | 99.99th=[41157] 00:11:54.911 bw ( KiB/s): min=11384, max=15448, per=38.40%, avg=13814.40, stdev=1650.34, samples=5 00:11:54.911 iops : min= 2846, max= 3862, avg=3453.60, stdev=412.59, samples=5 00:11:54.911 lat (usec) : 250=60.69%, 500=38.69%, 750=0.54%, 1000=0.03% 00:11:54.911 lat (msec) : 4=0.01%, 50=0.04% 00:11:54.911 cpu : usr=1.18%, sys=3.17%, ctx=10053, majf=0, minf=2 00:11:54.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.911 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.911 issued rwts: total=10050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.911 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3741311: Wed Nov 6 15:16:22 2024 00:11:54.911 read: IOPS=1458, BW=5833KiB/s (5973kB/s)(15.3MiB/2687msec) 00:11:54.911 slat (nsec): min=6679, max=36832, avg=8635.75, stdev=1806.39 00:11:54.911 clat (usec): min=212, max=42041, avg=670.20, stdev=4066.43 00:11:54.911 lat (usec): min=220, max=42062, avg=678.84, stdev=4067.29 00:11:54.911 clat percentiles (usec): 00:11:54.912 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 249], 00:11:54.912 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:11:54.912 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:11:54.912 | 99.00th=[ 545], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:54.912 | 99.99th=[42206] 00:11:54.912 bw ( KiB/s): min= 96, max=14736, per=17.41%, avg=6262.40, stdev=6245.37, samples=5 00:11:54.912 iops : min= 24, max= 3684, avg=1565.60, stdev=1561.34, samples=5 00:11:54.912 lat (usec) : 250=23.30%, 500=75.63%, 750=0.05% 00:11:54.912 lat (msec) : 50=1.00% 00:11:54.912 cpu : usr=0.37%, sys=1.56%, ctx=3920, majf=0, minf=2 
00:11:54.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.912 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.912 issued rwts: total=3919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.912 00:11:54.912 Run status group 0 (all jobs): 00:11:54.912 READ: bw=35.1MiB/s (36.8MB/s), 4080KiB/s-15.1MiB/s (4178kB/s-15.8MB/s), io=117MiB (123MB), run=2687-3339msec 00:11:54.912 00:11:54.912 Disk stats (read/write): 00:11:54.912 nvme0n1: ios=3158/0, merge=0/0, ticks=2991/0, in_queue=2991, util=94.30% 00:11:54.912 nvme0n2: ios=12900/0, merge=0/0, ticks=3054/0, in_queue=3054, util=94.21% 00:11:54.912 nvme0n3: ios=9928/0, merge=0/0, ticks=2957/0, in_queue=2957, util=98.84% 00:11:54.912 nvme0n4: ios=3950/0, merge=0/0, ticks=2983/0, in_queue=2983, util=99.85% 00:11:54.912 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.912 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:55.169 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:55.169 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:55.426 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:55.426 15:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc4 00:11:55.683 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:55.683 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:55.941 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:55.941 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:56.198 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:56.198 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3741165 00:11:56.198 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:56.198 15:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.223 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.223 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:11:57.223 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:57.223 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.223 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:57.223 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.223 15:16:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:11:57.223 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:57.223 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:57.223 nvmf hotplug test: fio failed as expected 00:11:57.223 15:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:57.497 rmmod nvme_tcp 00:11:57.497 rmmod nvme_fabrics 00:11:57.497 rmmod nvme_keyring 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3738132 ']' 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3738132 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3738132 ']' 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3738132 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:57.497 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3738132 00:11:57.755 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:57.755 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:57.755 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3738132' 00:11:57.755 killing process with pid 3738132 00:11:57.755 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3738132 00:11:57.755 15:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3738132 00:11:58.690 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:58.690 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:58.690 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:58.690 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:58.690 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:58.690 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:58.690 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:58.690 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.690 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:58.690 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.690 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.690 15:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.225 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.225 00:12:01.225 real 0m30.070s 00:12:01.225 user 1m57.860s 00:12:01.225 sys 0m9.190s 00:12:01.225 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:01.225 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.225 ************************************ 00:12:01.226 END TEST nvmf_fio_target 00:12:01.226 ************************************ 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:01.226 15:16:28 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:01.226 ************************************ 00:12:01.226 START TEST nvmf_bdevio 00:12:01.226 ************************************ 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:01.226 * Looking for test storage... 00:12:01.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@340 -- # ver1_l=2 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@368 -- # return 0 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:01.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.226 --rc genhtml_branch_coverage=1 00:12:01.226 --rc genhtml_function_coverage=1 00:12:01.226 --rc genhtml_legend=1 00:12:01.226 --rc geninfo_all_blocks=1 00:12:01.226 --rc geninfo_unexecuted_blocks=1 00:12:01.226 00:12:01.226 ' 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:01.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.226 --rc genhtml_branch_coverage=1 00:12:01.226 --rc genhtml_function_coverage=1 00:12:01.226 --rc genhtml_legend=1 00:12:01.226 --rc geninfo_all_blocks=1 00:12:01.226 --rc geninfo_unexecuted_blocks=1 00:12:01.226 00:12:01.226 ' 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:01.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.226 --rc genhtml_branch_coverage=1 00:12:01.226 --rc genhtml_function_coverage=1 00:12:01.226 --rc genhtml_legend=1 00:12:01.226 --rc geninfo_all_blocks=1 00:12:01.226 --rc geninfo_unexecuted_blocks=1 00:12:01.226 00:12:01.226 ' 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:01.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.226 --rc genhtml_branch_coverage=1 00:12:01.226 --rc genhtml_function_coverage=1 00:12:01.226 --rc genhtml_legend=1 00:12:01.226 --rc geninfo_all_blocks=1 00:12:01.226 --rc geninfo_unexecuted_blocks=1 00:12:01.226 00:12:01.226 ' 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.226 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.227 15:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:07.794 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.794 15:16:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:07.794 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:07.794 Found net devices under 0000:86:00.0: cvl_0_0 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.794 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:07.795 Found net devices under 0000:86:00.1: cvl_0_1 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:12:07.795 00:12:07.795 --- 10.0.0.2 ping statistics --- 00:12:07.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.795 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:07.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:12:07.795 00:12:07.795 --- 10.0.0.1 ping statistics --- 00:12:07.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.795 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3746018 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3746018 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3746018 ']' 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:07.795 15:16:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:07.795 [2024-11-06 15:16:34.755116] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:12:07.795 [2024-11-06 15:16:34.755208] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.795 [2024-11-06 15:16:34.884868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.795 [2024-11-06 15:16:34.992760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.795 [2024-11-06 15:16:34.992803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:07.795 [2024-11-06 15:16:34.992813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.795 [2024-11-06 15:16:34.992823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.795 [2024-11-06 15:16:34.992831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.795 [2024-11-06 15:16:34.995307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:07.795 [2024-11-06 15:16:34.995400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:07.795 [2024-11-06 15:16:34.995478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.795 [2024-11-06 15:16:34.995500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.053 [2024-11-06 15:16:35.609953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.053 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.310 Malloc0 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:08.310 [2024-11-06 
15:16:35.745780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:08.310 { 00:12:08.310 "params": { 00:12:08.310 "name": "Nvme$subsystem", 00:12:08.310 "trtype": "$TEST_TRANSPORT", 00:12:08.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:08.310 "adrfam": "ipv4", 00:12:08.310 "trsvcid": "$NVMF_PORT", 00:12:08.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:08.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:08.310 "hdgst": ${hdgst:-false}, 00:12:08.310 "ddgst": ${ddgst:-false} 00:12:08.310 }, 00:12:08.310 "method": "bdev_nvme_attach_controller" 00:12:08.310 } 00:12:08.310 EOF 00:12:08.310 )") 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:08.310 15:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:08.310 "params": { 00:12:08.310 "name": "Nvme1", 00:12:08.310 "trtype": "tcp", 00:12:08.310 "traddr": "10.0.0.2", 00:12:08.310 "adrfam": "ipv4", 00:12:08.310 "trsvcid": "4420", 00:12:08.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:08.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:08.310 "hdgst": false, 00:12:08.310 "ddgst": false 00:12:08.310 }, 00:12:08.310 "method": "bdev_nvme_attach_controller" 00:12:08.310 }' 00:12:08.310 [2024-11-06 15:16:35.821437] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:12:08.310 [2024-11-06 15:16:35.821521] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746268 ] 00:12:08.568 [2024-11-06 15:16:35.946412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.568 [2024-11-06 15:16:36.061309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.568 [2024-11-06 15:16:36.061384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.568 [2024-11-06 15:16:36.061405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.132 I/O targets: 00:12:09.132 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:09.132 00:12:09.132 00:12:09.132 CUnit - A unit testing framework for C - Version 2.1-3 00:12:09.132 http://cunit.sourceforge.net/ 00:12:09.132 00:12:09.132 00:12:09.132 Suite: bdevio tests on: Nvme1n1 00:12:09.132 Test: blockdev write read block ...passed 00:12:09.132 Test: blockdev write zeroes read block ...passed 00:12:09.390 Test: blockdev write zeroes read no split ...passed 00:12:09.390 Test: blockdev write zeroes read split 
...passed 00:12:09.390 Test: blockdev write zeroes read split partial ...passed 00:12:09.390 Test: blockdev reset ...[2024-11-06 15:16:36.856820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:09.390 [2024-11-06 15:16:36.856936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e680 (9): Bad file descriptor 00:12:09.390 [2024-11-06 15:16:36.926334] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:09.390 passed 00:12:09.390 Test: blockdev write read 8 blocks ...passed 00:12:09.390 Test: blockdev write read size > 128k ...passed 00:12:09.390 Test: blockdev write read invalid size ...passed 00:12:09.390 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:09.390 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:09.390 Test: blockdev write read max offset ...passed 00:12:09.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:09.647 Test: blockdev writev readv 8 blocks ...passed 00:12:09.647 Test: blockdev writev readv 30 x 1block ...passed 00:12:09.647 Test: blockdev writev readv block ...passed 00:12:09.647 Test: blockdev writev readv size > 128k ...passed 00:12:09.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:09.647 Test: blockdev comparev and writev ...[2024-11-06 15:16:37.144117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.647 [2024-11-06 15:16:37.144164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:09.647 [2024-11-06 15:16:37.144184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.647 [2024-11-06 
15:16:37.144208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:09.647 [2024-11-06 15:16:37.144490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.647 [2024-11-06 15:16:37.144505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:09.647 [2024-11-06 15:16:37.144521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.647 [2024-11-06 15:16:37.144531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:09.647 [2024-11-06 15:16:37.144809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.647 [2024-11-06 15:16:37.144823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:09.647 [2024-11-06 15:16:37.144840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.647 [2024-11-06 15:16:37.144850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:09.647 [2024-11-06 15:16:37.145126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.647 [2024-11-06 15:16:37.145141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:09.647 [2024-11-06 15:16:37.145156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.647 [2024-11-06 15:16:37.145184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:09.647 passed 00:12:09.647 Test: blockdev nvme passthru rw ...passed 00:12:09.647 Test: blockdev nvme passthru vendor specific ...[2024-11-06 15:16:37.227662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:09.647 [2024-11-06 15:16:37.227693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:09.647 [2024-11-06 15:16:37.227828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:09.647 [2024-11-06 15:16:37.227842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:09.647 [2024-11-06 15:16:37.227962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:09.647 [2024-11-06 15:16:37.227975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:09.647 [2024-11-06 15:16:37.228096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:09.647 [2024-11-06 15:16:37.228108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:09.647 passed 00:12:09.647 Test: blockdev nvme admin passthru ...passed 00:12:09.905 Test: blockdev copy ...passed 00:12:09.905 00:12:09.905 Run Summary: Type Total Ran Passed Failed Inactive 00:12:09.905 suites 1 1 n/a 0 0 00:12:09.905 tests 23 23 23 0 0 00:12:09.905 asserts 152 152 152 0 n/a 00:12:09.905 00:12:09.905 Elapsed time = 1.323 seconds 
00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.837 rmmod nvme_tcp 00:12:10.837 rmmod nvme_fabrics 00:12:10.837 rmmod nvme_keyring 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3746018 ']' 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3746018 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 
-- # '[' -z 3746018 ']' 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3746018 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3746018 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3746018' 00:12:10.837 killing process with pid 3746018 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3746018 00:12:10.837 15:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3746018 00:12:12.211 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:12.211 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:12.211 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:12.211 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:12.211 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:12.211 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:12.211 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:12.211 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:12:12.211 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:12.211 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.211 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.211 15:16:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.114 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.114 00:12:14.114 real 0m13.230s 00:12:14.114 user 0m24.169s 00:12:14.114 sys 0m5.307s 00:12:14.114 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:14.114 15:16:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:14.114 ************************************ 00:12:14.114 END TEST nvmf_bdevio 00:12:14.114 ************************************ 00:12:14.114 15:16:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:14.114 00:12:14.114 real 5m8.362s 00:12:14.114 user 11m59.831s 00:12:14.114 sys 1m41.888s 00:12:14.114 15:16:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:14.114 15:16:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:14.114 ************************************ 00:12:14.114 END TEST nvmf_target_core 00:12:14.114 ************************************ 00:12:14.374 15:16:41 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:14.374 15:16:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:14.374 15:16:41 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:14.374 15:16:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:12:14.374 ************************************ 00:12:14.374 START TEST nvmf_target_extra 00:12:14.374 ************************************ 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:14.374 * Looking for test storage... 00:12:14.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:14.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.374 --rc genhtml_branch_coverage=1 00:12:14.374 --rc genhtml_function_coverage=1 00:12:14.374 --rc genhtml_legend=1 00:12:14.374 --rc geninfo_all_blocks=1 
00:12:14.374 --rc geninfo_unexecuted_blocks=1 00:12:14.374 00:12:14.374 ' 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:14.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.374 --rc genhtml_branch_coverage=1 00:12:14.374 --rc genhtml_function_coverage=1 00:12:14.374 --rc genhtml_legend=1 00:12:14.374 --rc geninfo_all_blocks=1 00:12:14.374 --rc geninfo_unexecuted_blocks=1 00:12:14.374 00:12:14.374 ' 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:14.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.374 --rc genhtml_branch_coverage=1 00:12:14.374 --rc genhtml_function_coverage=1 00:12:14.374 --rc genhtml_legend=1 00:12:14.374 --rc geninfo_all_blocks=1 00:12:14.374 --rc geninfo_unexecuted_blocks=1 00:12:14.374 00:12:14.374 ' 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:14.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.374 --rc genhtml_branch_coverage=1 00:12:14.374 --rc genhtml_function_coverage=1 00:12:14.374 --rc genhtml_legend=1 00:12:14.374 --rc geninfo_all_blocks=1 00:12:14.374 --rc geninfo_unexecuted_blocks=1 00:12:14.374 00:12:14.374 ' 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.374 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:14.375 15:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.634 ************************************ 00:12:14.634 START TEST nvmf_example 00:12:14.634 ************************************ 00:12:14.634 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:14.635 * Looking for test storage... 00:12:14.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.635 
15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:14.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.635 --rc genhtml_branch_coverage=1 00:12:14.635 --rc genhtml_function_coverage=1 00:12:14.635 --rc genhtml_legend=1 00:12:14.635 --rc geninfo_all_blocks=1 00:12:14.635 --rc geninfo_unexecuted_blocks=1 00:12:14.635 00:12:14.635 ' 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:14.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.635 --rc genhtml_branch_coverage=1 00:12:14.635 --rc genhtml_function_coverage=1 00:12:14.635 --rc genhtml_legend=1 00:12:14.635 --rc geninfo_all_blocks=1 00:12:14.635 --rc geninfo_unexecuted_blocks=1 00:12:14.635 00:12:14.635 ' 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:14.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.635 --rc genhtml_branch_coverage=1 00:12:14.635 --rc genhtml_function_coverage=1 00:12:14.635 --rc genhtml_legend=1 00:12:14.635 --rc geninfo_all_blocks=1 00:12:14.635 --rc geninfo_unexecuted_blocks=1 00:12:14.635 00:12:14.635 ' 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:14.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.635 --rc 
genhtml_branch_coverage=1 00:12:14.635 --rc genhtml_function_coverage=1 00:12:14.635 --rc genhtml_legend=1 00:12:14.635 --rc geninfo_all_blocks=1 00:12:14.635 --rc geninfo_unexecuted_blocks=1 00:12:14.635 00:12:14.635 ' 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.635 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:14.636 15:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.636 
15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:14.636 15:16:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:21.227 15:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:21.227 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:21.227 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:21.227 Found net devices under 0000:86:00.0: cvl_0_0 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.227 15:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:21.227 Found net devices under 0000:86:00.1: cvl_0_1 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.227 
15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.227 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:21.228 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:21.228 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.228 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.228 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:21.228 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:21.228 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.228 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.228 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.228 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.228 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:21.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:12:21.228 00:12:21.228 --- 10.0.0.2 ping statistics --- 00:12:21.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.228 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:12:21.228 00:12:21.228 --- 10.0.0.1 ping statistics --- 00:12:21.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.228 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:21.228 15:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3750542 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3750542 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3750542 ']' 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:12:21.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:21.228 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.485 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:21.485 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:12:21.485 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:21.485 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:21.485 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.485 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.485 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.485 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.485 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.485 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:21.485 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.485 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:21.743 
15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:21.743 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:33.937 Initializing NVMe Controllers 00:12:33.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:33.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:33.937 Initialization complete. Launching workers. 00:12:33.937 ======================================================== 00:12:33.937 Latency(us) 00:12:33.937 Device Information : IOPS MiB/s Average min max 00:12:33.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16648.00 65.03 3845.34 800.67 15398.90 00:12:33.937 ======================================================== 00:12:33.937 Total : 16648.00 65.03 3845.34 800.67 15398.90 00:12:33.937 00:12:33.937 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:33.937 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:33.937 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:33.937 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:33.937 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.937 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:33.937 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.938 rmmod nvme_tcp 00:12:33.938 rmmod nvme_fabrics 00:12:33.938 rmmod nvme_keyring 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3750542 ']' 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3750542 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3750542 ']' 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3750542 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3750542 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3750542' 00:12:33.938 killing process with pid 3750542 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3750542 00:12:33.938 15:16:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3750542 00:12:33.938 nvmf threads initialize successfully 00:12:33.938 bdev subsystem init successfully 00:12:33.938 created a nvmf target service 00:12:33.938 create targets's poll groups done 00:12:33.938 all subsystems of target started 00:12:33.938 nvmf target is running 00:12:33.938 all subsystems of target stopped 00:12:33.938 destroy targets's poll groups done 00:12:33.938 destroyed the nvmf target service 00:12:33.938 bdev subsystem 
finish successfully 00:12:33.938 nvmf threads destroy successfully 00:12:33.938 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:33.938 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:33.938 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:33.938 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:33.938 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:33.938 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:33.938 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:33.938 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.938 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:33.938 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.938 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.938 15:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:35.846 00:12:35.846 real 0m21.061s 00:12:35.846 user 0m49.474s 00:12:35.846 sys 0m6.125s 00:12:35.846 
15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:35.846 ************************************ 00:12:35.846 END TEST nvmf_example 00:12:35.846 ************************************ 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.846 ************************************ 00:12:35.846 START TEST nvmf_filesystem 00:12:35.846 ************************************ 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:35.846 * Looking for test storage... 
00:12:35.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:35.846 
15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:35.846 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:35.846 --rc genhtml_branch_coverage=1 00:12:35.846 --rc genhtml_function_coverage=1 00:12:35.846 --rc genhtml_legend=1 00:12:35.846 --rc geninfo_all_blocks=1 00:12:35.846 --rc geninfo_unexecuted_blocks=1 00:12:35.846 00:12:35.846 ' 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:35.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.846 --rc genhtml_branch_coverage=1 00:12:35.846 --rc genhtml_function_coverage=1 00:12:35.846 --rc genhtml_legend=1 00:12:35.846 --rc geninfo_all_blocks=1 00:12:35.846 --rc geninfo_unexecuted_blocks=1 00:12:35.846 00:12:35.846 ' 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:35.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.846 --rc genhtml_branch_coverage=1 00:12:35.846 --rc genhtml_function_coverage=1 00:12:35.846 --rc genhtml_legend=1 00:12:35.846 --rc geninfo_all_blocks=1 00:12:35.846 --rc geninfo_unexecuted_blocks=1 00:12:35.846 00:12:35.846 ' 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:35.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.846 --rc genhtml_branch_coverage=1 00:12:35.846 --rc genhtml_function_coverage=1 00:12:35.846 --rc genhtml_legend=1 00:12:35.846 --rc geninfo_all_blocks=1 00:12:35.846 --rc geninfo_unexecuted_blocks=1 00:12:35.846 00:12:35.846 ' 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:35.846 15:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:35.846 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:35.847 15:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:35.847 15:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:35.847 15:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:35.847 15:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:35.847 
15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:35.847 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:35.847 #define SPDK_CONFIG_H 00:12:35.847 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:35.847 #define SPDK_CONFIG_APPS 1 00:12:35.847 #define SPDK_CONFIG_ARCH native 00:12:35.847 #define SPDK_CONFIG_ASAN 1 00:12:35.847 #undef SPDK_CONFIG_AVAHI 00:12:35.847 #undef SPDK_CONFIG_CET 00:12:35.847 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:35.847 #define SPDK_CONFIG_COVERAGE 1 00:12:35.848 #define SPDK_CONFIG_CROSS_PREFIX 00:12:35.848 #undef SPDK_CONFIG_CRYPTO 00:12:35.848 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:35.848 #undef SPDK_CONFIG_CUSTOMOCF 00:12:35.848 #undef SPDK_CONFIG_DAOS 00:12:35.848 #define SPDK_CONFIG_DAOS_DIR 00:12:35.848 #define SPDK_CONFIG_DEBUG 1 00:12:35.848 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:35.848 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:35.848 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:35.848 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:35.848 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:35.848 #undef SPDK_CONFIG_DPDK_UADK 00:12:35.848 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:35.848 #define SPDK_CONFIG_EXAMPLES 1 00:12:35.848 #undef SPDK_CONFIG_FC 00:12:35.848 #define SPDK_CONFIG_FC_PATH 00:12:35.848 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:35.848 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:35.848 #define SPDK_CONFIG_FSDEV 1 00:12:35.848 #undef SPDK_CONFIG_FUSE 00:12:35.848 #undef SPDK_CONFIG_FUZZER 00:12:35.848 #define SPDK_CONFIG_FUZZER_LIB 00:12:35.848 #undef SPDK_CONFIG_GOLANG 00:12:35.848 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:35.848 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:35.848 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:35.848 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:35.848 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:35.848 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:35.848 #undef SPDK_CONFIG_HAVE_LZ4 00:12:35.848 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:35.848 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:35.848 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:35.848 #define SPDK_CONFIG_IDXD 1 00:12:35.848 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:35.848 #undef SPDK_CONFIG_IPSEC_MB 00:12:35.848 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:35.848 #define SPDK_CONFIG_ISAL 1 00:12:35.848 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:35.848 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:35.848 #define SPDK_CONFIG_LIBDIR 00:12:35.848 #undef SPDK_CONFIG_LTO 00:12:35.848 #define SPDK_CONFIG_MAX_LCORES 128 00:12:35.848 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:35.848 #define SPDK_CONFIG_NVME_CUSE 1 00:12:35.848 #undef SPDK_CONFIG_OCF 00:12:35.848 #define SPDK_CONFIG_OCF_PATH 00:12:35.848 #define SPDK_CONFIG_OPENSSL_PATH 00:12:35.848 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:35.848 #define SPDK_CONFIG_PGO_DIR 00:12:35.848 #undef SPDK_CONFIG_PGO_USE 00:12:35.848 #define SPDK_CONFIG_PREFIX /usr/local 00:12:35.848 #undef SPDK_CONFIG_RAID5F 00:12:35.848 #undef SPDK_CONFIG_RBD 00:12:35.848 #define SPDK_CONFIG_RDMA 1 00:12:35.848 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:35.848 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:35.848 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:35.848 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:35.848 #define SPDK_CONFIG_SHARED 1 00:12:35.848 #undef SPDK_CONFIG_SMA 00:12:35.848 #define SPDK_CONFIG_TESTS 1 00:12:35.848 #undef SPDK_CONFIG_TSAN 00:12:35.848 #define SPDK_CONFIG_UBLK 1 00:12:35.848 #define SPDK_CONFIG_UBSAN 1 00:12:35.848 #undef SPDK_CONFIG_UNIT_TESTS 00:12:35.848 #undef SPDK_CONFIG_URING 00:12:35.848 #define SPDK_CONFIG_URING_PATH 00:12:35.848 #undef SPDK_CONFIG_URING_ZNS 00:12:35.848 #undef SPDK_CONFIG_USDT 00:12:35.848 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:35.848 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:35.848 #undef SPDK_CONFIG_VFIO_USER 00:12:35.848 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:35.848 #define SPDK_CONFIG_VHOST 1 00:12:35.848 #define SPDK_CONFIG_VIRTIO 1 00:12:35.848 #undef SPDK_CONFIG_VTUNE 00:12:35.848 #define SPDK_CONFIG_VTUNE_DIR 00:12:35.848 #define SPDK_CONFIG_WERROR 1 00:12:35.848 #define SPDK_CONFIG_WPDK_DIR 00:12:35.848 #undef SPDK_CONFIG_XNVME 00:12:35.848 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:35.848 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:35.849 15:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:35.849 
15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:35.849 15:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:35.849 
15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:35.849 15:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:35.849 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:12:35.850 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3753143 ]] 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3753143 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.Zi86eh 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Zi86eh/tests/target /tmp/spdk.Zi86eh 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=189323427840 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963973632 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6640545792 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97970618368 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981984768 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169753088 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192797184 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23044096 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981562880 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981988864 00:12:35.851 15:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=425984 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:12:35.851 * Looking for test storage... 
00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=189323427840 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8855138304 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.851 15:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:35.851 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:35.852 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:35.852 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:35.852 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:35.852 15:17:03 
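The storage probe traced above (`df` piped through `awk`, then an arithmetic check of `target_space` against the requested size) can be sketched as a standalone script. The directory and the size requirement below are illustrative values, not the harness's:

```shell
#!/usr/bin/env bash
# Sketch of the test-storage probe traced above: ask df which mount backs a
# candidate directory, then check that its free space covers the request.
# target_dir and requested_size are illustrative, not the harness's values.
target_dir=/tmp
requested_size=$((100 * 1024))   # 100 MB in 1K blocks (assumed requirement)

# df -P prints a header row plus one data row per filesystem; the awk filter
# drops the header and keeps the mount point ($6) and available space ($4).
mount=$(df -P "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
target_space=$(df -P "$target_dir" | awk '$1 !~ /Filesystem/{print $4}')

if (( target_space >= requested_size )); then
    printf '* Found test storage at %s\n' "$mount"
else
    printf '* Not enough space on %s\n' "$mount" >&2
fi
```

`df -P` (POSIX mode) guarantees the one-line-per-filesystem layout the awk filter assumes; plain `df`, as used in the trace, may wrap rows with long device names.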
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:35.852 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:35.852 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:35.852 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:35.852 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:12:35.852 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:36.111 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:36.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.112 --rc genhtml_branch_coverage=1 00:12:36.112 --rc genhtml_function_coverage=1 00:12:36.112 --rc genhtml_legend=1 00:12:36.112 --rc geninfo_all_blocks=1 00:12:36.112 --rc geninfo_unexecuted_blocks=1 00:12:36.112 00:12:36.112 ' 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:36.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.112 --rc genhtml_branch_coverage=1 00:12:36.112 --rc genhtml_function_coverage=1 00:12:36.112 --rc genhtml_legend=1 00:12:36.112 --rc geninfo_all_blocks=1 00:12:36.112 --rc geninfo_unexecuted_blocks=1 00:12:36.112 00:12:36.112 ' 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:36.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.112 --rc genhtml_branch_coverage=1 00:12:36.112 --rc genhtml_function_coverage=1 00:12:36.112 --rc genhtml_legend=1 00:12:36.112 --rc geninfo_all_blocks=1 00:12:36.112 --rc geninfo_unexecuted_blocks=1 00:12:36.112 00:12:36.112 ' 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:36.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.112 --rc genhtml_branch_coverage=1 00:12:36.112 --rc genhtml_function_coverage=1 00:12:36.112 --rc genhtml_legend=1 00:12:36.112 --rc geninfo_all_blocks=1 00:12:36.112 --rc geninfo_unexecuted_blocks=1 00:12:36.112 00:12:36.112 ' 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.112 15:17:03 
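The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.`, `-`, and `:` (via `IFS=.-:` and `read -ra`) and compares them field by field. A simplified standalone reconstruction of that pattern, not the harness's exact implementation:

```shell
#!/usr/bin/env bash
# Simplified field-by-field version comparison in the style of the
# cmp_versions trace above: exit status 0 (true) when $1 < $2.
version_lt() {
    local IFS=.-:              # split on dot, dash, and colon, as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing fields compare as 0, so 1.15 is treated as 1.15.0.
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        if (( a > b )); then return 1; fi
        if (( a < b )); then return 0; fi
    done
    return 1                   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2"
```

This is why the trace reports lcov 1.15 as older than 2 and enables the pre-2.0 `--rc lcov_branch_coverage=1` options.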
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
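The `[: : integer expression expected` message in the trace comes from an empty variable reaching `test`'s numeric comparison (`'[' '' -eq 1 ']'`). A minimal reproduction and the usual defensive form, with an illustrative variable name:

```shell
#!/usr/bin/env bash
# Reproduce the "integer expression expected" warning seen in the log:
# [ ... -eq ... ] needs integers on both sides, and an unset or empty
# variable expands to the empty string.
flag=""

# This is the failing shape; exit status is 2 and a warning goes to stderr.
[ "$flag" -eq 1 ] 2>/dev/null && echo "enabled"

# Defensive form: default empty to 0 so the comparison is always numeric.
if [ "${flag:-0}" -eq 1 ]; then echo "enabled"; else echo "disabled"; fi
```

In the trace the script continues anyway because the test is only a feature gate, but the warning shows up in the log each time the gate is evaluated with an unset variable.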
MALLOC_BDEV_SIZE=512 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.112 15:17:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:42.680 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.680 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:42.680 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:42.680 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:42.680 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:42.680 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:42.680 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:42.680 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:42.680 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:42.680 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:42.680 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:42.680 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.681 15:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:42.681 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:42.681 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.681 15:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:42.681 Found net devices under 0000:86:00.0: cvl_0_0 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:42.681 Found net devices under 0000:86:00.1: cvl_0_1 00:12:42.681 15:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:42.681 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:42.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:42.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:12:42.682 00:12:42.682 --- 10.0.0.2 ping statistics --- 00:12:42.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.682 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:12:42.682 00:12:42.682 --- 10.0.0.1 ping statistics --- 00:12:42.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.682 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:42.682 15:17:09 
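The two ping statistics blocks above are how the harness confirms the netns/interface plumbing before starting the target. A small sketch that extracts packet loss and average round-trip time from such a block (the sample text is copied from the log):

```shell
#!/usr/bin/env bash
# Parse the summary lines of ping's statistics block, as printed in the
# trace above, to pull out packet loss and the average round-trip time.
stats='--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms'

# The loss line is comma-separated; field 3 is "0% packet loss".
loss=$(awk -F', ' '/packet loss/ {sub(/%.*/, "", $3); print $3}' <<< "$stats")

# The rtt line is slash-separated after "min/avg/max/mdev ="; field 5 is avg.
avg_rtt=$(awk -F'/' '/^rtt/ {print $5}' <<< "$stats")

echo "loss=${loss}% avg_rtt=${avg_rtt}ms"   # prints "loss=0% avg_rtt=0.514ms"
```

A nonzero loss (or no statistics block at all) at this point would mean the `ip netns` / `ip addr` setup earlier in the trace failed, so checking these numbers early keeps later test failures attributable.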
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:42.682 ************************************ 00:12:42.682 START TEST nvmf_filesystem_no_in_capsule 00:12:42.682 ************************************ 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3756215 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3756215 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@833 -- # '[' -z 3756215 ']' 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:42.682 15:17:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:42.682 [2024-11-06 15:17:09.721777] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:12:42.682 [2024-11-06 15:17:09.721874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.682 [2024-11-06 15:17:09.852549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.682 [2024-11-06 15:17:09.959928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.682 [2024-11-06 15:17:09.959970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:42.682 [2024-11-06 15:17:09.959982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.682 [2024-11-06 15:17:09.959992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.682 [2024-11-06 15:17:09.960000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.682 [2024-11-06 15:17:09.962336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.682 [2024-11-06 15:17:09.962353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.682 [2024-11-06 15:17:09.962466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.682 [2024-11-06 15:17:09.962488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.941 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:42.941 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:12:42.941 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.941 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:42.941 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:42.941 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.941 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:42.941 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:42.941 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.941 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:42.941 [2024-11-06 15:17:10.574663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.201 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.201 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:43.201 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.201 15:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.769 Malloc1 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.769 [2024-11-06 15:17:11.194481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:12:43.769 15:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.769 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:12:43.769 { 00:12:43.769 "name": "Malloc1", 00:12:43.769 "aliases": [ 00:12:43.769 "42b3e573-737e-4948-aed8-293acc0e46bf" 00:12:43.769 ], 00:12:43.769 "product_name": "Malloc disk", 00:12:43.769 "block_size": 512, 00:12:43.769 "num_blocks": 1048576, 00:12:43.769 "uuid": "42b3e573-737e-4948-aed8-293acc0e46bf", 00:12:43.769 "assigned_rate_limits": { 00:12:43.769 "rw_ios_per_sec": 0, 00:12:43.769 "rw_mbytes_per_sec": 0, 00:12:43.769 "r_mbytes_per_sec": 0, 00:12:43.769 "w_mbytes_per_sec": 0 00:12:43.769 }, 00:12:43.769 "claimed": true, 00:12:43.769 "claim_type": "exclusive_write", 00:12:43.769 "zoned": false, 00:12:43.769 "supported_io_types": { 00:12:43.769 "read": true, 00:12:43.769 "write": true, 00:12:43.769 "unmap": true, 00:12:43.769 "flush": true, 00:12:43.770 "reset": true, 00:12:43.770 "nvme_admin": false, 00:12:43.770 "nvme_io": false, 00:12:43.770 "nvme_io_md": false, 00:12:43.770 "write_zeroes": true, 00:12:43.770 "zcopy": true, 00:12:43.770 "get_zone_info": false, 00:12:43.770 "zone_management": false, 00:12:43.770 "zone_append": false, 00:12:43.770 "compare": false, 00:12:43.770 "compare_and_write": 
false, 00:12:43.770 "abort": true, 00:12:43.770 "seek_hole": false, 00:12:43.770 "seek_data": false, 00:12:43.770 "copy": true, 00:12:43.770 "nvme_iov_md": false 00:12:43.770 }, 00:12:43.770 "memory_domains": [ 00:12:43.770 { 00:12:43.770 "dma_device_id": "system", 00:12:43.770 "dma_device_type": 1 00:12:43.770 }, 00:12:43.770 { 00:12:43.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.770 "dma_device_type": 2 00:12:43.770 } 00:12:43.770 ], 00:12:43.770 "driver_specific": {} 00:12:43.770 } 00:12:43.770 ]' 00:12:43.770 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:12:43.770 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:12:43.770 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:12:43.770 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:12:43.770 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:12:43.770 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:12:43.770 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:43.770 15:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.145 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:45.145 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:12:45.145 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.145 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:45.145 15:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:12:47.045 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:47.046 15:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:47.046 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:47.304 15:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:47.871 15:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:49.249 15:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:49.249 ************************************ 00:12:49.249 START TEST filesystem_ext4 00:12:49.249 ************************************ 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:12:49.249 15:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:12:49.249 15:17:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:49.249 mke2fs 1.47.0 (5-Feb-2023) 00:12:49.249 Discarding device blocks: 0/522240 done 00:12:49.249 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:49.249 Filesystem UUID: cf7d4463-222b-4843-a017-8a9f6fcacf05 00:12:49.250 Superblock backups stored on blocks: 00:12:49.250 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:49.250 00:12:49.250 Allocating group tables: 0/64 done 00:12:49.250 Writing inode tables: 0/64 done 00:12:49.508 Creating journal (8192 blocks): done 00:12:49.508 Writing superblocks and filesystem accounting information: 0/64 done 00:12:49.508 00:12:49.508 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:12:49.508 15:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:56.071 15:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3756215 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:56.071 00:12:56.071 real 0m6.123s 00:12:56.071 user 0m0.026s 00:12:56.071 sys 0m0.069s 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:56.071 ************************************ 00:12:56.071 END TEST filesystem_ext4 00:12:56.071 ************************************ 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:56.071 
15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:56.071 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:56.072 ************************************ 00:12:56.072 START TEST filesystem_btrfs 00:12:56.072 ************************************ 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:12:56.072 15:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:56.072 btrfs-progs v6.8.1 00:12:56.072 See https://btrfs.readthedocs.io for more information. 00:12:56.072 00:12:56.072 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:56.072 NOTE: several default settings have changed in version 5.15, please make sure 00:12:56.072 this does not affect your deployments: 00:12:56.072 - DUP for metadata (-m dup) 00:12:56.072 - enabled no-holes (-O no-holes) 00:12:56.072 - enabled free-space-tree (-R free-space-tree) 00:12:56.072 00:12:56.072 Label: (null) 00:12:56.072 UUID: 382a94a3-771a-4dc6-a14e-02b362f64ad1 00:12:56.072 Node size: 16384 00:12:56.072 Sector size: 4096 (CPU page size: 4096) 00:12:56.072 Filesystem size: 510.00MiB 00:12:56.072 Block group profiles: 00:12:56.072 Data: single 8.00MiB 00:12:56.072 Metadata: DUP 32.00MiB 00:12:56.072 System: DUP 8.00MiB 00:12:56.072 SSD detected: yes 00:12:56.072 Zoned device: no 00:12:56.072 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:56.072 Checksum: crc32c 00:12:56.072 Number of devices: 1 00:12:56.072 Devices: 00:12:56.072 ID SIZE PATH 00:12:56.072 1 510.00MiB /dev/nvme0n1p1 00:12:56.072 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:12:56.072 15:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:56.331 15:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3756215 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:56.331 00:12:56.331 real 0m1.202s 00:12:56.331 user 0m0.037s 00:12:56.331 sys 0m0.101s 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:56.331 
15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:56.331 ************************************ 00:12:56.331 END TEST filesystem_btrfs 00:12:56.331 ************************************ 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:56.331 ************************************ 00:12:56.331 START TEST filesystem_xfs 00:12:56.331 ************************************ 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:12:56.331 15:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:56.589 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:56.590 = sectsz=512 attr=2, projid32bit=1 00:12:56.590 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:56.590 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:56.590 data = bsize=4096 blocks=130560, imaxpct=25 00:12:56.590 = sunit=0 swidth=0 blks 00:12:56.590 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:56.590 log =internal log bsize=4096 blocks=16384, version=2 00:12:56.590 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:56.590 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:57.524 Discarding blocks...Done. 
00:12:57.524 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:12:57.524 15:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:59.425 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:59.425 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:59.425 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:59.425 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:59.425 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:59.425 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:59.425 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3756215 00:12:59.425 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:59.425 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:59.425 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:59.425 15:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:59.425 00:12:59.425 real 0m2.767s 00:12:59.425 user 0m0.020s 00:12:59.425 sys 0m0.076s 00:12:59.425 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:59.425 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:59.425 ************************************ 00:12:59.425 END TEST filesystem_xfs 00:12:59.425 ************************************ 00:12:59.426 15:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:59.426 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:59.426 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3756215 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3756215 ']' 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3756215 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3756215 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:59.685 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:59.686 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3756215' 00:12:59.686 killing process with pid 3756215 00:12:59.686 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3756215 00:12:59.686 15:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 3756215 00:13:02.220 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:02.220 00:13:02.220 real 0m20.220s 00:13:02.220 user 1m18.083s 00:13:02.220 sys 0m1.593s 00:13:02.220 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:02.220 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.220 ************************************ 00:13:02.220 END TEST nvmf_filesystem_no_in_capsule 00:13:02.220 ************************************ 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:02.479 15:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:02.479 ************************************ 00:13:02.479 START TEST nvmf_filesystem_in_capsule 00:13:02.479 ************************************ 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3759875 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3759875 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3759875 ']' 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.479 15:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:02.479 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.479 [2024-11-06 15:17:30.009552] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:13:02.479 [2024-11-06 15:17:30.009646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.738 [2024-11-06 15:17:30.145167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.738 [2024-11-06 15:17:30.255489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.738 [2024-11-06 15:17:30.255538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.738 [2024-11-06 15:17:30.255548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.738 [2024-11-06 15:17:30.255558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.738 [2024-11-06 15:17:30.255567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:02.738 [2024-11-06 15:17:30.258133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.738 [2024-11-06 15:17:30.258243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.738 [2024-11-06 15:17:30.258293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.739 [2024-11-06 15:17:30.258314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.306 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:03.306 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:13:03.306 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.306 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.306 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.306 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.306 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:03.307 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:03.307 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.307 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.307 [2024-11-06 15:17:30.868844] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.307 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.307 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:03.307 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.307 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.961 Malloc1 00:13:03.961 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.961 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:03.961 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.961 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.961 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.961 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.961 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.961 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.961 15:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.962 [2024-11-06 15:17:31.477798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.962 15:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:13:03.962 { 00:13:03.962 "name": "Malloc1", 00:13:03.962 "aliases": [ 00:13:03.962 "5fe071a6-318a-4978-a34f-66777a2b9988" 00:13:03.962 ], 00:13:03.962 "product_name": "Malloc disk", 00:13:03.962 "block_size": 512, 00:13:03.962 "num_blocks": 1048576, 00:13:03.962 "uuid": "5fe071a6-318a-4978-a34f-66777a2b9988", 00:13:03.962 "assigned_rate_limits": { 00:13:03.962 "rw_ios_per_sec": 0, 00:13:03.962 "rw_mbytes_per_sec": 0, 00:13:03.962 "r_mbytes_per_sec": 0, 00:13:03.962 "w_mbytes_per_sec": 0 00:13:03.962 }, 00:13:03.962 "claimed": true, 00:13:03.962 "claim_type": "exclusive_write", 00:13:03.962 "zoned": false, 00:13:03.962 "supported_io_types": { 00:13:03.962 "read": true, 00:13:03.962 "write": true, 00:13:03.962 "unmap": true, 00:13:03.962 "flush": true, 00:13:03.962 "reset": true, 00:13:03.962 "nvme_admin": false, 00:13:03.962 "nvme_io": false, 00:13:03.962 "nvme_io_md": false, 00:13:03.962 "write_zeroes": true, 00:13:03.962 "zcopy": true, 00:13:03.962 "get_zone_info": false, 00:13:03.962 "zone_management": false, 00:13:03.962 "zone_append": false, 00:13:03.962 "compare": false, 00:13:03.962 "compare_and_write": false, 00:13:03.962 "abort": true, 00:13:03.962 "seek_hole": false, 00:13:03.962 "seek_data": false, 00:13:03.962 "copy": true, 00:13:03.962 "nvme_iov_md": false 00:13:03.962 }, 00:13:03.962 "memory_domains": [ 00:13:03.962 { 00:13:03.962 "dma_device_id": "system", 00:13:03.962 "dma_device_type": 1 00:13:03.962 }, 00:13:03.962 { 00:13:03.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.962 "dma_device_type": 2 00:13:03.962 } 00:13:03.962 ], 00:13:03.962 
"driver_specific": {} 00:13:03.962 } 00:13:03.962 ]' 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:03.962 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.438 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.438 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:13:05.438 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.438 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n 
'' ]] 00:13:05.438 15:17:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:07.341 15:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:07.341 15:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:07.909 15:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.844 ************************************ 00:13:08.844 START TEST filesystem_in_capsule_ext4 00:13:08.844 ************************************ 00:13:08.844 15:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:13:08.844 15:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:08.844 mke2fs 1.47.0 (5-Feb-2023) 00:13:08.844 Discarding device blocks: 
0/522240 done 00:13:08.844 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:08.844 Filesystem UUID: d07eb538-b38c-466b-b721-819e69b3b121 00:13:08.844 Superblock backups stored on blocks: 00:13:08.844 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:08.844 00:13:08.844 Allocating group tables: 0/64 done 00:13:08.844 Writing inode tables: 0/64 done 00:13:09.102 Creating journal (8192 blocks): done 00:13:10.994 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:13:10.994 00:13:10.994 15:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:13:10.994 15:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:17.556 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3759875 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:17.556 00:13:17.556 real 0m7.718s 00:13:17.556 user 0m0.023s 00:13:17.556 sys 0m0.080s 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 ************************************ 00:13:17.556 END TEST filesystem_in_capsule_ext4 00:13:17.556 ************************************ 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 ************************************ 00:13:17.556 START 
TEST filesystem_in_capsule_btrfs 00:13:17.556 ************************************ 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:17.556 btrfs-progs v6.8.1 00:13:17.556 See https://btrfs.readthedocs.io for more information. 00:13:17.556 00:13:17.556 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:17.556 NOTE: several default settings have changed in version 5.15, please make sure 00:13:17.556 this does not affect your deployments: 00:13:17.556 - DUP for metadata (-m dup) 00:13:17.556 - enabled no-holes (-O no-holes) 00:13:17.556 - enabled free-space-tree (-R free-space-tree) 00:13:17.556 00:13:17.556 Label: (null) 00:13:17.556 UUID: 954b06e3-c88c-49a1-a65f-c08aabd35000 00:13:17.556 Node size: 16384 00:13:17.556 Sector size: 4096 (CPU page size: 4096) 00:13:17.556 Filesystem size: 510.00MiB 00:13:17.556 Block group profiles: 00:13:17.556 Data: single 8.00MiB 00:13:17.556 Metadata: DUP 32.00MiB 00:13:17.556 System: DUP 8.00MiB 00:13:17.556 SSD detected: yes 00:13:17.556 Zoned device: no 00:13:17.556 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:17.556 Checksum: crc32c 00:13:17.556 Number of devices: 1 00:13:17.556 Devices: 00:13:17.556 ID SIZE PATH 00:13:17.556 1 510.00MiB /dev/nvme0n1p1 00:13:17.556 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3759875 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:17.556 00:13:17.556 real 0m0.500s 00:13:17.556 user 0m0.023s 00:13:17.556 sys 0m0.117s 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 ************************************ 00:13:17.556 END TEST filesystem_in_capsule_btrfs 00:13:17.556 ************************************ 00:13:17.556 15:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 ************************************ 00:13:17.556 START TEST filesystem_in_capsule_xfs 00:13:17.556 ************************************ 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:13:17.556 
15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:13:17.556 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:17.556 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:17.556 = sectsz=512 attr=2, projid32bit=1 00:13:17.556 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:17.556 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:17.556 data = bsize=4096 blocks=130560, imaxpct=25 00:13:17.556 = sunit=0 swidth=0 blks 00:13:17.556 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:17.557 log =internal log bsize=4096 blocks=16384, version=2 00:13:17.557 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:17.557 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:18.123 Discarding blocks...Done. 
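The xtrace records above step through `make_filesystem` in common/autotest_common.sh: it saves the fstype and device, picks a force flag, then runs `mkfs.xfs -f /dev/nvme0n1p1`. A minimal dry-run sketch of that flag selection is below; the function body here only echoes the command instead of executing mkfs (my addition, so the sketch is safe to run), and the retry loop of the real helper is omitted.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the force-flag logic visible at
# common/autotest_common.sh@928-939 in the log: ext4 takes -F,
# while btrfs/xfs take -f (matching "mkfs.xfs -f /dev/nvme0n1p1" above).
make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    # The real helper invokes mkfs and retries on failure; we just print.
    echo "mkfs.$fstype $force $dev_name"
}

make_filesystem xfs /dev/nvme0n1p1
make_filesystem ext4 /dev/nvme0n1p1
```

After mkfs succeeds the harness does the same smoke cycle for every fstype: mount, touch a file, sync, rm, sync, umount, then checks `lsblk` still lists the namespace and partition.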
00:13:18.123 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:13:18.123 15:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3759875 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:20.026 00:13:20.026 real 0m2.691s 00:13:20.026 user 0m0.032s 00:13:20.026 sys 0m0.065s 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:20.026 ************************************ 00:13:20.026 END TEST filesystem_in_capsule_xfs 00:13:20.026 ************************************ 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:20.026 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.286 15:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3759875 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3759875 ']' 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3759875 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:20.286 15:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3759875 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3759875' 00:13:20.286 killing process with pid 3759875 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3759875 00:13:20.286 15:17:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3759875 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:23.572 00:13:23.572 real 0m20.597s 00:13:23.572 user 1m19.568s 00:13:23.572 sys 0m1.607s 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.572 ************************************ 00:13:23.572 END TEST nvmf_filesystem_in_capsule 00:13:23.572 ************************************ 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:23.572 rmmod nvme_tcp 00:13:23.572 rmmod nvme_fabrics 00:13:23.572 rmmod nvme_keyring 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.572 15:17:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.476 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:25.476 00:13:25.476 real 0m49.547s 00:13:25.476 user 2m39.613s 00:13:25.476 sys 0m7.982s 00:13:25.476 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:25.476 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:25.476 ************************************ 00:13:25.476 END TEST nvmf_filesystem 00:13:25.477 ************************************ 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:25.477 ************************************ 00:13:25.477 START TEST nvmf_target_discovery 00:13:25.477 ************************************ 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:25.477 * Looking for test storage... 
00:13:25.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:25.477 
15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:25.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.477 --rc genhtml_branch_coverage=1 00:13:25.477 --rc genhtml_function_coverage=1 00:13:25.477 --rc genhtml_legend=1 00:13:25.477 --rc geninfo_all_blocks=1 00:13:25.477 --rc geninfo_unexecuted_blocks=1 00:13:25.477 00:13:25.477 ' 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:25.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.477 --rc genhtml_branch_coverage=1 00:13:25.477 --rc genhtml_function_coverage=1 00:13:25.477 --rc genhtml_legend=1 00:13:25.477 --rc geninfo_all_blocks=1 00:13:25.477 --rc geninfo_unexecuted_blocks=1 00:13:25.477 00:13:25.477 ' 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:25.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.477 --rc genhtml_branch_coverage=1 00:13:25.477 --rc genhtml_function_coverage=1 00:13:25.477 --rc genhtml_legend=1 00:13:25.477 --rc geninfo_all_blocks=1 00:13:25.477 --rc geninfo_unexecuted_blocks=1 00:13:25.477 00:13:25.477 ' 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:25.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.477 --rc genhtml_branch_coverage=1 00:13:25.477 --rc genhtml_function_coverage=1 00:13:25.477 --rc genhtml_legend=1 00:13:25.477 --rc geninfo_all_blocks=1 00:13:25.477 --rc geninfo_unexecuted_blocks=1 00:13:25.477 00:13:25.477 ' 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.477 15:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.477 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:25.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:25.478 15:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.045 15:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.045 15:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.045 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:32.046 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:32.046 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:32.046 15:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:32.046 Found net devices under 0000:86:00.0: cvl_0_0 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:32.046 15:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:32.046 Found net devices under 0000:86:00.1: cvl_0_1 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:32.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:13:32.046 00:13:32.046 --- 10.0.0.2 ping statistics --- 00:13:32.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.046 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:32.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:13:32.046 00:13:32.046 --- 10.0.0.1 ping statistics --- 00:13:32.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.046 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3766855 00:13:32.046 15:17:58 
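The `nvmf_tcp_init` sequence traced above splits one host into a target and an initiator: it moves one NIC into a fresh network namespace, assigns 10.0.0.2 to it, leaves the peer NIC (10.0.0.1) in the default namespace, opens TCP port 4420, and verifies both directions with ping. A minimal standalone sketch of that sequence follows; the interface names (`cvl_0_0`/`cvl_0_1`), namespace name, and addresses are the ones from this run, so substitute your own. Commands are only echoed unless `DRY_RUN=0`, since the real thing needs root.

```shell
# Sketch of the nvmf_tcp_init steps above (common.sh@250-291).
TARGET_IF=${TARGET_IF:-cvl_0_0}
INIT_IF=${INIT_IF:-cvl_0_1}
NS=${NS:-cvl_0_0_ns_spdk}
DRY_RUN=${DRY_RUN:-1}

run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

nvmf_tcp_init_sketch() {
    run ip -4 addr flush "$TARGET_IF"
    run ip -4 addr flush "$INIT_IF"
    run ip netns add "$NS"
    # The target NIC moves into the namespace; the initiator NIC stays in
    # the default namespace, giving two independent stacks on one host.
    run ip link set "$TARGET_IF" netns "$NS"
    run ip addr add 10.0.0.1/24 dev "$INIT_IF"
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    run ip link set "$INIT_IF" up
    run ip netns exec "$NS" ip link set "$TARGET_IF" up
    run ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP port (4420) toward the initiator-side interface,
    # matching the iptables rule the script tags with an SPDK_NVMF comment.
    run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
}
nvmf_tcp_init_sketch
```

This is why the target app is later launched under `ip netns exec cvl_0_0_ns_spdk` (`NVMF_TARGET_NS_CMD`): it must run inside the namespace that owns the target NIC.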
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3766855 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3766855 ']' 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:32.046 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.047 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:32.047 15:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.047 [2024-11-06 15:17:59.052946] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:13:32.047 [2024-11-06 15:17:59.053044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.047 [2024-11-06 15:17:59.187336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.047 [2024-11-06 15:17:59.296823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:32.047 [2024-11-06 15:17:59.296872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.047 [2024-11-06 15:17:59.296882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.047 [2024-11-06 15:17:59.296892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.047 [2024-11-06 15:17:59.296901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.047 [2024-11-06 15:17:59.299353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.047 [2024-11-06 15:17:59.299432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.047 [2024-11-06 15:17:59.299500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.047 [2024-11-06 15:17:59.299521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.304 [2024-11-06 15:17:59.898168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.304 Null1 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.304 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.562 
15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.562 [2024-11-06 15:17:59.959139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.562 Null2 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.562 
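The `discovery.sh` loop traced here repeats the same four-call pattern once per subsystem: create a null bdev, create the subsystem, attach the bdev as a namespace, then add a TCP listener; afterwards a discovery listener and a referral are added. A condensed sketch of that RPC sequence follows — by default it only echoes the calls; point `RPC` at SPDK's `scripts/rpc.py` to run it against a live `nvmf_tgt`. The NQNs, serial numbers, address, and ports mirror this run.

```shell
# Condensed sketch of the provisioning loop traced above
# (discovery.sh@26-35). Echo-only unless RPC is overridden.
RPC=${RPC:-"echo rpc.py"}

provision() {
    for i in 1 2 3 4; do
        # 102400 blocks of 512 B each, i.e. a 50 MiB null bdev
        $RPC bdev_null_create "Null$i" 102400 512
        # -a: allow any host; -s: serial number
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    # Discovery listener plus a referral to a second discovery service,
    # which together produce the 6 discovery log records shown below.
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
}
provision
```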
15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.562 15:17:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.562 Null3 00:13:32.562 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.562 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:32.562 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.562 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.562 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.562 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:32.562 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.562 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.562 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.562 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.563 Null4 00:13:32.563 
15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.563 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:32.822 00:13:32.822 Discovery Log Number of Records 6, Generation counter 6 00:13:32.822 =====Discovery Log Entry 0====== 00:13:32.822 trtype: tcp 00:13:32.822 adrfam: ipv4 00:13:32.822 subtype: current discovery subsystem 00:13:32.822 treq: not required 00:13:32.822 portid: 0 00:13:32.822 trsvcid: 4420 00:13:32.822 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:32.822 traddr: 10.0.0.2 00:13:32.822 eflags: explicit discovery connections, duplicate discovery information 00:13:32.822 sectype: none 00:13:32.822 =====Discovery Log Entry 1====== 00:13:32.822 trtype: tcp 00:13:32.822 adrfam: ipv4 00:13:32.822 subtype: nvme subsystem 00:13:32.822 treq: not required 00:13:32.822 portid: 0 00:13:32.822 trsvcid: 4420 00:13:32.822 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:32.822 traddr: 10.0.0.2 00:13:32.822 eflags: none 00:13:32.822 sectype: none 00:13:32.822 =====Discovery Log Entry 2====== 00:13:32.822 
trtype: tcp 00:13:32.822 adrfam: ipv4 00:13:32.822 subtype: nvme subsystem 00:13:32.822 treq: not required 00:13:32.822 portid: 0 00:13:32.822 trsvcid: 4420 00:13:32.822 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:32.822 traddr: 10.0.0.2 00:13:32.822 eflags: none 00:13:32.822 sectype: none 00:13:32.822 =====Discovery Log Entry 3====== 00:13:32.822 trtype: tcp 00:13:32.822 adrfam: ipv4 00:13:32.822 subtype: nvme subsystem 00:13:32.822 treq: not required 00:13:32.822 portid: 0 00:13:32.822 trsvcid: 4420 00:13:32.822 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:32.822 traddr: 10.0.0.2 00:13:32.822 eflags: none 00:13:32.822 sectype: none 00:13:32.822 =====Discovery Log Entry 4====== 00:13:32.822 trtype: tcp 00:13:32.822 adrfam: ipv4 00:13:32.822 subtype: nvme subsystem 00:13:32.822 treq: not required 00:13:32.822 portid: 0 00:13:32.822 trsvcid: 4420 00:13:32.822 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:32.822 traddr: 10.0.0.2 00:13:32.822 eflags: none 00:13:32.822 sectype: none 00:13:32.822 =====Discovery Log Entry 5====== 00:13:32.822 trtype: tcp 00:13:32.822 adrfam: ipv4 00:13:32.822 subtype: discovery subsystem referral 00:13:32.822 treq: not required 00:13:32.822 portid: 0 00:13:32.822 trsvcid: 4430 00:13:32.822 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:32.822 traddr: 10.0.0.2 00:13:32.822 eflags: none 00:13:32.822 sectype: none 00:13:32.822 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:32.822 Perform nvmf subsystem discovery via RPC 00:13:32.822 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:32.822 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.823 [ 00:13:32.823 { 00:13:32.823 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:32.823 "subtype": "Discovery", 00:13:32.823 "listen_addresses": [ 00:13:32.823 { 00:13:32.823 "trtype": "TCP", 00:13:32.823 "adrfam": "IPv4", 00:13:32.823 "traddr": "10.0.0.2", 00:13:32.823 "trsvcid": "4420" 00:13:32.823 } 00:13:32.823 ], 00:13:32.823 "allow_any_host": true, 00:13:32.823 "hosts": [] 00:13:32.823 }, 00:13:32.823 { 00:13:32.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.823 "subtype": "NVMe", 00:13:32.823 "listen_addresses": [ 00:13:32.823 { 00:13:32.823 "trtype": "TCP", 00:13:32.823 "adrfam": "IPv4", 00:13:32.823 "traddr": "10.0.0.2", 00:13:32.823 "trsvcid": "4420" 00:13:32.823 } 00:13:32.823 ], 00:13:32.823 "allow_any_host": true, 00:13:32.823 "hosts": [], 00:13:32.823 "serial_number": "SPDK00000000000001", 00:13:32.823 "model_number": "SPDK bdev Controller", 00:13:32.823 "max_namespaces": 32, 00:13:32.823 "min_cntlid": 1, 00:13:32.823 "max_cntlid": 65519, 00:13:32.823 "namespaces": [ 00:13:32.823 { 00:13:32.823 "nsid": 1, 00:13:32.823 "bdev_name": "Null1", 00:13:32.823 "name": "Null1", 00:13:32.823 "nguid": "6E81A860ED05410F800FCE8FA67E4214", 00:13:32.823 "uuid": "6e81a860-ed05-410f-800f-ce8fa67e4214" 00:13:32.823 } 00:13:32.823 ] 00:13:32.823 }, 00:13:32.823 { 00:13:32.823 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:32.823 "subtype": "NVMe", 00:13:32.823 "listen_addresses": [ 00:13:32.823 { 00:13:32.823 "trtype": "TCP", 00:13:32.823 "adrfam": "IPv4", 00:13:32.823 "traddr": "10.0.0.2", 00:13:32.823 "trsvcid": "4420" 00:13:32.823 } 00:13:32.823 ], 00:13:32.823 "allow_any_host": true, 00:13:32.823 "hosts": [], 00:13:32.823 "serial_number": "SPDK00000000000002", 00:13:32.823 "model_number": "SPDK bdev Controller", 00:13:32.823 "max_namespaces": 32, 00:13:32.823 "min_cntlid": 1, 00:13:32.823 "max_cntlid": 65519, 00:13:32.823 "namespaces": [ 00:13:32.823 { 00:13:32.823 "nsid": 1, 00:13:32.823 "bdev_name": "Null2", 00:13:32.823 "name": "Null2", 00:13:32.823 "nguid": "96BE256A65C04573A1966BBD2A026ADE", 
00:13:32.823 "uuid": "96be256a-65c0-4573-a196-6bbd2a026ade" 00:13:32.823 } 00:13:32.823 ] 00:13:32.823 }, 00:13:32.823 { 00:13:32.823 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:32.823 "subtype": "NVMe", 00:13:32.823 "listen_addresses": [ 00:13:32.823 { 00:13:32.823 "trtype": "TCP", 00:13:32.823 "adrfam": "IPv4", 00:13:32.823 "traddr": "10.0.0.2", 00:13:32.823 "trsvcid": "4420" 00:13:32.823 } 00:13:32.823 ], 00:13:32.823 "allow_any_host": true, 00:13:32.823 "hosts": [], 00:13:32.823 "serial_number": "SPDK00000000000003", 00:13:32.823 "model_number": "SPDK bdev Controller", 00:13:32.823 "max_namespaces": 32, 00:13:32.823 "min_cntlid": 1, 00:13:32.823 "max_cntlid": 65519, 00:13:32.823 "namespaces": [ 00:13:32.823 { 00:13:32.823 "nsid": 1, 00:13:32.823 "bdev_name": "Null3", 00:13:32.823 "name": "Null3", 00:13:32.823 "nguid": "8A4756B87C7647659625B7C4A94A3996", 00:13:32.823 "uuid": "8a4756b8-7c76-4765-9625-b7c4a94a3996" 00:13:32.823 } 00:13:32.823 ] 00:13:32.823 }, 00:13:32.823 { 00:13:32.823 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:32.823 "subtype": "NVMe", 00:13:32.823 "listen_addresses": [ 00:13:32.823 { 00:13:32.823 "trtype": "TCP", 00:13:32.823 "adrfam": "IPv4", 00:13:32.823 "traddr": "10.0.0.2", 00:13:32.823 "trsvcid": "4420" 00:13:32.823 } 00:13:32.823 ], 00:13:32.823 "allow_any_host": true, 00:13:32.823 "hosts": [], 00:13:32.823 "serial_number": "SPDK00000000000004", 00:13:32.823 "model_number": "SPDK bdev Controller", 00:13:32.823 "max_namespaces": 32, 00:13:32.823 "min_cntlid": 1, 00:13:32.823 "max_cntlid": 65519, 00:13:32.823 "namespaces": [ 00:13:32.823 { 00:13:32.823 "nsid": 1, 00:13:32.823 "bdev_name": "Null4", 00:13:32.823 "name": "Null4", 00:13:32.823 "nguid": "C296F2457FD34C618EA0462D4696B56E", 00:13:32.823 "uuid": "c296f245-7fd3-4c61-8ea0-462d4696b56e" 00:13:32.823 } 00:13:32.823 ] 00:13:32.823 } 00:13:32.823 ] 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.823 
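The `nvmf_get_subsystems` RPC output above is plain JSON, so it can be consumed programmatically rather than eyeballed. A minimal sketch, assuming only the field shape visible in this log (the sample below is a trimmed, hypothetical two-entry copy of that output, not the full capture):

```python
import json

# Hypothetical sample mirroring the shape of the `nvmf_get_subsystems`
# output captured in the log above, trimmed to two entries.
subsystems_json = """
[
  {
    "nqn": "nqn.2014-08.org.nvmexpress.discovery",
    "subtype": "Discovery",
    "listen_addresses": [
      {"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420"}
    ],
    "allow_any_host": true,
    "hosts": []
  },
  {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "subtype": "NVMe",
    "listen_addresses": [
      {"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420"}
    ],
    "allow_any_host": true,
    "hosts": [],
    "serial_number": "SPDK00000000000001",
    "namespaces": [
      {"nsid": 1, "bdev_name": "Null1",
       "uuid": "6e81a860-ed05-410f-800f-ce8fa67e4214"}
    ]
  }
]
"""

def nvme_subsystem_nqns(raw):
    # Keep only real NVMe subsystems, skipping the discovery subsystem,
    # much as the test iterates cnode1..cnode4 when deleting.
    return [s["nqn"] for s in json.loads(raw) if s["subtype"] == "NVMe"]

print(nvme_subsystem_nqns(subsystems_json))  # ['nqn.2016-06.io.spdk:cnode1']
```

This is the same kind of filtering the test later does with `jq -r '.[].name'` against `bdev_get_bdevs`.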
15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.823 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.823 rmmod nvme_tcp 00:13:33.082 rmmod nvme_fabrics 00:13:33.082 rmmod nvme_keyring 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3766855 ']' 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3766855 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3766855 ']' 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3766855 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 
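The teardown above uses the classic shell idiom `kill -0 $pid` (in `killprocess`) to test whether the nvmf target process still exists before and after killing it. A minimal Python sketch of the same liveness check, under the assumption of a POSIX host (the `sleep` child below is just a stand-in for the target process):

```python
import os
import subprocess

def process_alive(pid):
    # Mirrors the shell idiom `kill -0 $pid`: signal 0 delivers nothing,
    # but reports whether the pid refers to a signalable process.
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists, but is owned by another user

# Spawn a short-lived stand-in child, confirm it is alive, then
# terminate and reap it, as killprocess does with kill + wait.
child = subprocess.Popen(["sleep", "60"])
assert process_alive(child.pid)
child.terminate()
child.wait()  # reap so the pid no longer names a live/zombie process
print("cleaned up")
```

The `wait` after `kill` matters for the same reason it does in the log: an unreaped child stays visible as a zombie, so the liveness check would still succeed.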
00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3766855 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3766855' 00:13:33.082 killing process with pid 3766855 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3766855 00:13:33.082 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3766855 00:13:34.458 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:34.458 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:34.458 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:34.458 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:34.458 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:34.458 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:34.458 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:34.458 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:34.458 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:34.458 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.458 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.458 15:18:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.363 15:18:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:36.363 00:13:36.363 real 0m11.003s 00:13:36.363 user 0m10.629s 00:13:36.363 sys 0m4.921s 00:13:36.363 15:18:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:36.363 15:18:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:36.363 ************************************ 00:13:36.363 END TEST nvmf_target_discovery 00:13:36.363 ************************************ 00:13:36.363 15:18:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:36.363 15:18:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:36.363 15:18:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:36.363 15:18:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:36.363 ************************************ 00:13:36.363 START TEST nvmf_referrals 00:13:36.363 ************************************ 00:13:36.363 15:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:36.363 * Looking for test storage... 
00:13:36.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.363 15:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:36.363 15:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:13:36.363 15:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:36.623 15:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:36.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.623 
--rc genhtml_branch_coverage=1 00:13:36.623 --rc genhtml_function_coverage=1 00:13:36.623 --rc genhtml_legend=1 00:13:36.623 --rc geninfo_all_blocks=1 00:13:36.623 --rc geninfo_unexecuted_blocks=1 00:13:36.623 00:13:36.623 ' 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:36.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.623 --rc genhtml_branch_coverage=1 00:13:36.623 --rc genhtml_function_coverage=1 00:13:36.623 --rc genhtml_legend=1 00:13:36.623 --rc geninfo_all_blocks=1 00:13:36.623 --rc geninfo_unexecuted_blocks=1 00:13:36.623 00:13:36.623 ' 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:36.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.623 --rc genhtml_branch_coverage=1 00:13:36.623 --rc genhtml_function_coverage=1 00:13:36.623 --rc genhtml_legend=1 00:13:36.623 --rc geninfo_all_blocks=1 00:13:36.623 --rc geninfo_unexecuted_blocks=1 00:13:36.623 00:13:36.623 ' 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:36.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.623 --rc genhtml_branch_coverage=1 00:13:36.623 --rc genhtml_function_coverage=1 00:13:36.623 --rc genhtml_legend=1 00:13:36.623 --rc geninfo_all_blocks=1 00:13:36.623 --rc geninfo_unexecuted_blocks=1 00:13:36.623 00:13:36.623 ' 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.623 
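The trace above steps through `cmp_versions` from `scripts/common.sh`, evaluating `lt 1.15 2` field by field to decide whether the installed `lcov` is older than version 2. A loose Python approximation of that dotted-version comparison (a sketch of the idea, not the shell implementation):

```python
def version_lt(a, b):
    """Compare dotted version strings field by field, padding the
    shorter one with zeros, so "1.15" < "2" and "1.2" == "1.2.0"."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))
    pb += [0] * (width - len(pb))
    # List comparison is lexicographic, matching the shell loop that
    # walks ver1[v] against ver2[v] until one field differs.
    return pa < pb

print(version_lt("1.15", "2"))   # True, matching the `lt 1.15 2` check above
print(version_lt("2.1", "2.1"))  # False
```

Note that numeric field comparison is what makes `1.15 < 2` come out true; a plain string comparison would get it wrong, which is why the script parses each field rather than comparing whole strings.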
15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.623 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.624 15:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:36.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:36.624 15:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:36.624 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=()
00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810
00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=()
00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722
00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=()
00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx
00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:43.194 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:13:43.195 Found 0000:86:00.0 (0x8086 - 0x159b)
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:13:43.195 Found 0000:86:00.1 (0x8086 - 0x159b)
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:13:43.195 Found net devices under 0000:86:00.0: cvl_0_0
00:13:43.195 15:18:09
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:13:43.195 Found net devices under 0000:86:00.1: cvl_0_1
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:43.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:43.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms
00:13:43.195
00:13:43.195 --- 10.0.0.2 ping statistics ---
00:13:43.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:43.195 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms
00:13:43.195 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:43.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:43.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms
00:13:43.195
00:13:43.195 --- 10.0.0.1 ping statistics ---
00:13:43.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:43.195 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3771334
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3771334
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3771334 ']'
00:13:43.195 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:43.196 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100
00:13:43.196 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:43.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:43.196 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable
00:13:43.196 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.196 [2024-11-06 15:18:10.144361] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:13:43.196 [2024-11-06 15:18:10.144472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:43.196 [2024-11-06 15:18:10.278177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:43.196 [2024-11-06 15:18:10.394919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:43.196 [2024-11-06 15:18:10.394960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:43.196 [2024-11-06 15:18:10.394972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:43.196 [2024-11-06 15:18:10.394982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:43.196 [2024-11-06 15:18:10.394990] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:43.196 [2024-11-06 15:18:10.397653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:43.196 [2024-11-06 15:18:10.397706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:43.196 [2024-11-06 15:18:10.397790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:43.196 [2024-11-06 15:18:10.397768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:43.454 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:13:43.454 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0
00:13:43.454 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:43.454 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:43.454 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.454 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:43.454 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:43.454 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.454 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.454 [2024-11-06 15:18:10.992853] tcp.c:
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.454 [2024-11-06 15:18:11.018947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.454 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:13:43.712 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.713 15:18:11
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.713 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.971 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.971 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:43.972 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals --
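The get_referral_ips helper traced above gathers traddr values (via `rpc_cmd nvmf_discovery_get_referrals` or `nvme discover -o json`), pipes them through `sort`, and compares the space-joined result against the expected list with a `[[ ... == ... ]]` pattern match. The join-and-compare step can be reproduced in isolation (the addresses are hard-coded here for illustration; the real script reads them from the target):

```shell
# Simulated referral traddr list, one address per line, deliberately unsorted.
refs="127.0.0.4
127.0.0.2
127.0.0.3"

# Sort the addresses, join them with single spaces, and trim the trailing
# space left by tr -- giving the same canonical string the test compares.
sorted=$(echo "$refs" | sort | tr '\n' ' ')
sorted="${sorted% }"
echo "$sorted"
```

With the sample input this prints `127.0.0.2 127.0.0.3 127.0.0.4`, which is exactly the form the log's `[[ 127.0.0.2 127.0.0.3 127.0.0.4 == ... ]]` checks operate on.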
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:13:44.231 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:13:44.489 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:13:44.489 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:13:44.489 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn
00:13:44.489 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:13:44.489 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:13:44.489 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem'
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:13:44.748 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:13:45.006 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]]
00:13:45.006 15:18:12
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:13:45.006 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn
00:13:45.006 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:13:45.006 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:13:45.006 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 ))
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:13:45.265 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:13:45.524 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:13:45.524 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:13:45.524 15:18:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]]
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:45.524 rmmod nvme_tcp
00:13:45.524 rmmod nvme_fabrics
00:13:45.524 rmmod nvme_keyring
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3771334 ']'
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3771334
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3771334 ']'
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3771334
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3771334
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3771334'
00:13:45.524 killing process with pid 3771334
00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals --
common/autotest_common.sh@971 -- # kill 3771334 00:13:45.524 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3771334 00:13:46.903 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:46.903 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:46.903 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:46.903 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:46.903 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:46.903 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:46.903 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:46.903 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:46.903 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:46.903 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.903 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.903 15:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.807 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:48.807 00:13:48.807 real 0m12.469s 00:13:48.807 user 0m17.064s 00:13:48.807 sys 0m5.396s 00:13:48.807 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:48.807 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.807 
************************************ 00:13:48.807 END TEST nvmf_referrals 00:13:48.807 ************************************ 00:13:48.807 15:18:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:48.807 15:18:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:48.807 15:18:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:48.807 15:18:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.807 ************************************ 00:13:48.807 START TEST nvmf_connect_disconnect 00:13:48.807 ************************************ 00:13:48.807 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:49.067 * Looking for test storage... 
00:13:49.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:49.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.067 --rc genhtml_branch_coverage=1 00:13:49.067 --rc genhtml_function_coverage=1 00:13:49.067 --rc genhtml_legend=1 00:13:49.067 --rc geninfo_all_blocks=1 00:13:49.067 --rc geninfo_unexecuted_blocks=1 00:13:49.067 00:13:49.067 ' 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:49.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.067 --rc genhtml_branch_coverage=1 00:13:49.067 --rc genhtml_function_coverage=1 00:13:49.067 --rc genhtml_legend=1 00:13:49.067 --rc geninfo_all_blocks=1 00:13:49.067 --rc geninfo_unexecuted_blocks=1 00:13:49.067 00:13:49.067 ' 00:13:49.067 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:49.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.067 --rc genhtml_branch_coverage=1 00:13:49.067 --rc genhtml_function_coverage=1 00:13:49.068 --rc genhtml_legend=1 00:13:49.068 --rc geninfo_all_blocks=1 00:13:49.068 --rc geninfo_unexecuted_blocks=1 00:13:49.068 00:13:49.068 ' 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:49.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.068 --rc genhtml_branch_coverage=1 00:13:49.068 --rc genhtml_function_coverage=1 00:13:49.068 --rc genhtml_legend=1 00:13:49.068 --rc geninfo_all_blocks=1 00:13:49.068 --rc geninfo_unexecuted_blocks=1 00:13:49.068 00:13:49.068 ' 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:49.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:49.068 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:49.069 15:18:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:55.637 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.637 15:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:55.637 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:55.638 15:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:55.638 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:55.638 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.638 15:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:55.638 Found net devices under 0000:86:00.0: cvl_0_0 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:55.638 15:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:55.638 Found net devices under 0000:86:00.1: cvl_0_1 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:55.638 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.654 15:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:55.654 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:55.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:13:55.654 00:13:55.654 --- 10.0.0.2 ping statistics --- 00:13:55.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.654 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:55.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:13:55.655 00:13:55.655 --- 10.0.0.1 ping statistics --- 00:13:55.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.655 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=3775688 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3775688 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3775688 ']' 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:55.655 15:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:55.655 [2024-11-06 15:18:22.599179] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:13:55.655 [2024-11-06 15:18:22.599292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.655 [2024-11-06 15:18:22.724857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:55.655 [2024-11-06 15:18:22.833635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:55.655 [2024-11-06 15:18:22.833682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.655 [2024-11-06 15:18:22.833692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.655 [2024-11-06 15:18:22.833702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.655 [2024-11-06 15:18:22.833710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.655 [2024-11-06 15:18:22.836162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.655 [2024-11-06 15:18:22.836259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.655 [2024-11-06 15:18:22.836324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.655 [2024-11-06 15:18:22.836345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.913 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:55.913 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:13:55.913 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:55.914 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:55.914 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:55.914 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.914 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:55.914 15:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.914 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:55.914 [2024-11-06 15:18:23.467505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.914 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.914 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:55.914 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.914 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.173 15:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:56.173 [2024-11-06 15:18:23.590863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:56.173 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:58.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.724 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.251 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.763 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.635 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:41.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:45.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:50.617 rmmod nvme_tcp 00:17:50.617 rmmod nvme_fabrics 00:17:50.617 rmmod nvme_keyring 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3775688 ']' 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3775688 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3775688 ']' 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3775688 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3775688 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3775688' 00:17:50.617 killing process with pid 3775688 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3775688 00:17:50.617 15:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3775688 00:17:51.996 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:51.996 15:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:51.996 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:51.996 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:51.996 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:51.996 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:51.996 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:51.996 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.996 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:51.996 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.996 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.996 15:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:53.900 00:17:53.900 real 4m4.877s 00:17:53.900 user 15m33.602s 00:17:53.900 sys 0m25.696s 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:53.900 ************************************ 00:17:53.900 END TEST nvmf_connect_disconnect 00:17:53.900 ************************************ 00:17:53.900 15:22:21 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:53.900 ************************************ 00:17:53.900 START TEST nvmf_multitarget 00:17:53.900 ************************************ 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:53.900 * Looking for test storage... 00:17:53.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.900 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:53.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.901 --rc genhtml_branch_coverage=1 00:17:53.901 --rc genhtml_function_coverage=1 00:17:53.901 --rc genhtml_legend=1 00:17:53.901 --rc geninfo_all_blocks=1 00:17:53.901 --rc 
geninfo_unexecuted_blocks=1 00:17:53.901 00:17:53.901 ' 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:53.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.901 --rc genhtml_branch_coverage=1 00:17:53.901 --rc genhtml_function_coverage=1 00:17:53.901 --rc genhtml_legend=1 00:17:53.901 --rc geninfo_all_blocks=1 00:17:53.901 --rc geninfo_unexecuted_blocks=1 00:17:53.901 00:17:53.901 ' 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:53.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.901 --rc genhtml_branch_coverage=1 00:17:53.901 --rc genhtml_function_coverage=1 00:17:53.901 --rc genhtml_legend=1 00:17:53.901 --rc geninfo_all_blocks=1 00:17:53.901 --rc geninfo_unexecuted_blocks=1 00:17:53.901 00:17:53.901 ' 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:53.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.901 --rc genhtml_branch_coverage=1 00:17:53.901 --rc genhtml_function_coverage=1 00:17:53.901 --rc genhtml_legend=1 00:17:53.901 --rc geninfo_all_blocks=1 00:17:53.901 --rc geninfo_unexecuted_blocks=1 00:17:53.901 00:17:53.901 ' 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.901 15:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.901 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.161 15:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:54.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:54.161 15:22:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:00.739 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:00.739 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:00.739 Found net devices under 0000:86:00.0: cvl_0_0 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- 
# [[ tcp == tcp ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.739 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:00.739 Found net devices under 0000:86:00.1: cvl_0_1 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:00.740 15:22:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:00.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:18:00.740 00:18:00.740 --- 10.0.0.2 ping statistics --- 00:18:00.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.740 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:18:00.740 00:18:00.740 --- 10.0.0.1 ping statistics --- 00:18:00.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.740 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3819488 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3819488 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3819488 ']' 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:00.740 15:22:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:00.740 [2024-11-06 15:22:27.583101] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:18:00.740 [2024-11-06 15:22:27.583220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.740 [2024-11-06 15:22:27.716859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:00.740 [2024-11-06 15:22:27.827252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.740 [2024-11-06 15:22:27.827298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.740 [2024-11-06 15:22:27.827310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.740 [2024-11-06 15:22:27.827321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.740 [2024-11-06 15:22:27.827330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:00.740 [2024-11-06 15:22:27.830029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.740 [2024-11-06 15:22:27.830122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.740 [2024-11-06 15:22:27.830127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.740 [2024-11-06 15:22:27.830147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.999 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:00.999 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:18:00.999 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.999 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.999 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:00.999 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.999 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:00.999 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:00.999 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:18:01.000 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:01.000 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:18:01.000 "nvmf_tgt_1" 00:18:01.258 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:01.258 "nvmf_tgt_2" 00:18:01.258 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:01.258 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:18:01.258 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:01.258 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:01.517 true 00:18:01.517 15:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:01.517 true 00:18:01.517 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:01.517 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:01.776 15:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:01.776 rmmod nvme_tcp 00:18:01.776 rmmod nvme_fabrics 00:18:01.776 rmmod nvme_keyring 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3819488 ']' 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3819488 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3819488 ']' 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3819488 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3819488 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3819488' 00:18:01.776 killing process with pid 3819488 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3819488 00:18:01.776 15:22:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3819488 00:18:03.154 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:03.154 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:03.154 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:03.154 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:18:03.154 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:03.154 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:18:03.155 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:18:03.155 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:03.155 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:03.155 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.155 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.155 15:22:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:05.184 
00:18:05.184 real 0m11.198s 00:18:05.184 user 0m12.529s 00:18:05.184 sys 0m5.071s 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:05.184 ************************************ 00:18:05.184 END TEST nvmf_multitarget 00:18:05.184 ************************************ 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:05.184 ************************************ 00:18:05.184 START TEST nvmf_rpc 00:18:05.184 ************************************ 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:05.184 * Looking for test storage... 
00:18:05.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.184 15:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:05.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.184 --rc genhtml_branch_coverage=1 00:18:05.184 --rc genhtml_function_coverage=1 00:18:05.184 --rc genhtml_legend=1 00:18:05.184 --rc geninfo_all_blocks=1 00:18:05.184 --rc geninfo_unexecuted_blocks=1 
00:18:05.184 00:18:05.184 ' 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:05.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.184 --rc genhtml_branch_coverage=1 00:18:05.184 --rc genhtml_function_coverage=1 00:18:05.184 --rc genhtml_legend=1 00:18:05.184 --rc geninfo_all_blocks=1 00:18:05.184 --rc geninfo_unexecuted_blocks=1 00:18:05.184 00:18:05.184 ' 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:05.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.184 --rc genhtml_branch_coverage=1 00:18:05.184 --rc genhtml_function_coverage=1 00:18:05.184 --rc genhtml_legend=1 00:18:05.184 --rc geninfo_all_blocks=1 00:18:05.184 --rc geninfo_unexecuted_blocks=1 00:18:05.184 00:18:05.184 ' 00:18:05.184 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:05.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.185 --rc genhtml_branch_coverage=1 00:18:05.185 --rc genhtml_function_coverage=1 00:18:05.185 --rc genhtml_legend=1 00:18:05.185 --rc geninfo_all_blocks=1 00:18:05.185 --rc geninfo_unexecuted_blocks=1 00:18:05.185 00:18:05.185 ' 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.185 15:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:05.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:05.185 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:05.444 15:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:18:05.444 15:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:18:12.017 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.018 
15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:18:12.018 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:12.018 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:12.018 Found net devices under 0000:86:00.0: cvl_0_0 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:12.018 Found net devices under 0000:86:00.1: cvl_0_1 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.018 15:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:12.018 
15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:12.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:18:12.018 00:18:12.018 --- 10.0.0.2 ping statistics --- 00:18:12.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.018 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:12.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:18:12.018 00:18:12.018 --- 10.0.0.1 ping statistics --- 00:18:12.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.018 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3823453 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3823453 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3823453 ']' 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.018 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:12.019 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.019 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:12.019 15:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.019 [2024-11-06 15:22:38.845722] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:18:12.019 [2024-11-06 15:22:38.845809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.019 [2024-11-06 15:22:38.976731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:12.019 [2024-11-06 15:22:39.091975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.019 [2024-11-06 15:22:39.092019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:12.019 [2024-11-06 15:22:39.092029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.019 [2024-11-06 15:22:39.092039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.019 [2024-11-06 15:22:39.092047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.019 [2024-11-06 15:22:39.094513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.019 [2024-11-06 15:22:39.094602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.019 [2024-11-06 15:22:39.094669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.019 [2024-11-06 15:22:39.094693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:12.019 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:12.019 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:18:12.019 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:12.019 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:12.019 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.278 15:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:12.278 "tick_rate": 2100000000, 00:18:12.278 "poll_groups": [ 00:18:12.278 { 00:18:12.278 "name": "nvmf_tgt_poll_group_000", 00:18:12.278 "admin_qpairs": 0, 00:18:12.278 "io_qpairs": 0, 00:18:12.278 "current_admin_qpairs": 0, 00:18:12.278 "current_io_qpairs": 0, 00:18:12.278 "pending_bdev_io": 0, 00:18:12.278 "completed_nvme_io": 0, 00:18:12.278 "transports": [] 00:18:12.278 }, 00:18:12.278 { 00:18:12.278 "name": "nvmf_tgt_poll_group_001", 00:18:12.278 "admin_qpairs": 0, 00:18:12.278 "io_qpairs": 0, 00:18:12.278 "current_admin_qpairs": 0, 00:18:12.278 "current_io_qpairs": 0, 00:18:12.278 "pending_bdev_io": 0, 00:18:12.278 "completed_nvme_io": 0, 00:18:12.278 "transports": [] 00:18:12.278 }, 00:18:12.278 { 00:18:12.278 "name": "nvmf_tgt_poll_group_002", 00:18:12.278 "admin_qpairs": 0, 00:18:12.278 "io_qpairs": 0, 00:18:12.278 "current_admin_qpairs": 0, 00:18:12.278 "current_io_qpairs": 0, 00:18:12.278 "pending_bdev_io": 0, 00:18:12.278 "completed_nvme_io": 0, 00:18:12.278 "transports": [] 00:18:12.278 }, 00:18:12.278 { 00:18:12.278 "name": "nvmf_tgt_poll_group_003", 00:18:12.278 "admin_qpairs": 0, 00:18:12.278 "io_qpairs": 0, 00:18:12.278 "current_admin_qpairs": 0, 00:18:12.278 "current_io_qpairs": 0, 00:18:12.278 "pending_bdev_io": 0, 00:18:12.278 "completed_nvme_io": 0, 00:18:12.278 "transports": [] 00:18:12.278 } 00:18:12.278 ] 00:18:12.278 }' 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:12.278 15:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.278 [2024-11-06 15:22:39.792875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.278 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:12.278 "tick_rate": 2100000000, 00:18:12.278 "poll_groups": [ 00:18:12.278 { 00:18:12.278 "name": "nvmf_tgt_poll_group_000", 00:18:12.278 "admin_qpairs": 0, 00:18:12.278 "io_qpairs": 0, 00:18:12.278 "current_admin_qpairs": 0, 00:18:12.278 "current_io_qpairs": 0, 00:18:12.278 "pending_bdev_io": 0, 00:18:12.278 "completed_nvme_io": 0, 00:18:12.278 "transports": [ 00:18:12.278 { 00:18:12.278 "trtype": "TCP" 00:18:12.278 } 00:18:12.278 ] 00:18:12.278 }, 00:18:12.278 { 00:18:12.278 "name": "nvmf_tgt_poll_group_001", 00:18:12.278 "admin_qpairs": 0, 00:18:12.278 "io_qpairs": 0, 00:18:12.278 "current_admin_qpairs": 0, 00:18:12.278 "current_io_qpairs": 0, 00:18:12.278 "pending_bdev_io": 0, 00:18:12.278 
"completed_nvme_io": 0, 00:18:12.278 "transports": [ 00:18:12.278 { 00:18:12.278 "trtype": "TCP" 00:18:12.278 } 00:18:12.278 ] 00:18:12.278 }, 00:18:12.278 { 00:18:12.278 "name": "nvmf_tgt_poll_group_002", 00:18:12.278 "admin_qpairs": 0, 00:18:12.278 "io_qpairs": 0, 00:18:12.278 "current_admin_qpairs": 0, 00:18:12.278 "current_io_qpairs": 0, 00:18:12.278 "pending_bdev_io": 0, 00:18:12.278 "completed_nvme_io": 0, 00:18:12.278 "transports": [ 00:18:12.278 { 00:18:12.278 "trtype": "TCP" 00:18:12.278 } 00:18:12.278 ] 00:18:12.278 }, 00:18:12.278 { 00:18:12.278 "name": "nvmf_tgt_poll_group_003", 00:18:12.279 "admin_qpairs": 0, 00:18:12.279 "io_qpairs": 0, 00:18:12.279 "current_admin_qpairs": 0, 00:18:12.279 "current_io_qpairs": 0, 00:18:12.279 "pending_bdev_io": 0, 00:18:12.279 "completed_nvme_io": 0, 00:18:12.279 "transports": [ 00:18:12.279 { 00:18:12.279 "trtype": "TCP" 00:18:12.279 } 00:18:12.279 ] 00:18:12.279 } 00:18:12.279 ] 00:18:12.279 }' 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:12.279 
15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.279 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.538 Malloc1 00:18:12.538 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.538 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:12.538 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.538 15:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:12.538 15:22:40 
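The `jcount` and `jsum` helpers exercised above reduce `nvmf_get_stats` JSON with `jq`, `wc -l`, and `awk`. A self-contained sketch of the same aggregation, using `grep`/`awk` in place of `jq` so it runs without extra dependencies; the inline JSON is an abbreviated copy of the stats captured in the log (queue-pair detail fields omitted):

```shell
# Abbreviated copy of the nvmf_get_stats output shown in the log above.
stats='{
  "tick_rate": 2100000000,
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "admin_qpairs": 0, "io_qpairs": 0},
    {"name": "nvmf_tgt_poll_group_001", "admin_qpairs": 0, "io_qpairs": 0},
    {"name": "nvmf_tgt_poll_group_002", "admin_qpairs": 0, "io_qpairs": 0},
    {"name": "nvmf_tgt_poll_group_003", "admin_qpairs": 0, "io_qpairs": 0}
  ]
}'

# jcount equivalent: count poll groups (one per reactor core started).
count=$(printf '%s\n' "$stats" | grep -c '"name"')

# jsum equivalent: sum a numeric field across all poll groups.
admin_total=$(printf '%s\n' "$stats" \
  | grep -o '"admin_qpairs": [0-9]*' \
  | awk '{s += $2} END {print s + 0}')

echo "poll groups: $count, total admin qpairs: $admin_total"
```

With four reactor cores and no connected initiators, this prints `poll groups: 4, total admin qpairs: 0`, matching the `(( 4 == 4 ))` and `(( 0 == 0 ))` checks in the transcript.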
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.538 [2024-11-06 15:22:40.023601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:18:12.538 [2024-11-06 15:22:40.053124] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:18:12.538 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:12.538 could not add new controller: failed to write to nvme-fabrics device 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.538 15:22:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:13.917 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:13.917 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:13.917 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:13.917 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:13.917 15:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:15.819 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:15.819 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:15.819 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:15.819 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:15.819 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:15.819 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 
00:18:15.819 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:16.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:18:16.078 15:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:16.078 [2024-11-06 15:22:43.565251] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:18:16.078 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:16.078 could not add new controller: failed to write to nvme-fabrics device 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:18:16.078 
15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.078 15:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:17.455 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:17.455 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:17.455 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.455 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:17.455 15:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:19.357 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:19.357 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:19.357 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c 
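The failed connect above, followed by a successful one, illustrates NVMe-oF host access control: with `allow_any_host` disabled, a connect from an unregistered host NQN is rejected by the target (`does not allow host ...`), and succeeds only after the host is registered with `nvmf_subsystem_add_host` or the subsystem is opened with `nvmf_subsystem_allow_any_host -e`. A non-runnable sketch of that sequence against a live SPDK target (RPC names and NQNs are taken from the log; the `scripts/rpc.py` path is an assumption):

```shell
# Assumes a running SPDK nvmf target; illustrative only, not standalone.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# With allow_any_host disabled (-d), this connect fails with
# "Subsystem ... does not allow host ...":
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 || true

# Either register the host NQN explicitly...
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
  nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
# ...or open the subsystem to any host, as the test does next:
./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
```

After either step the same `nvme connect` succeeds, which is what the transcript's second connect and subsequent `waitforserial SPDKISFASTANDAWESOME` verify.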
SPDKISFASTANDAWESOME 00:18:19.357 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:19.357 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.357 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:19.357 15:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:19.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.616 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:19.616 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:19.616 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:19.616 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:19.616 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:19.617 15:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:19.617 [2024-11-06 15:22:47.155910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.617 15:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:20.993 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:20.993 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:20.993 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.993 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:20.993 15:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:22.896 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:22.896 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:22.896 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:22.896 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:22.896 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.896 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:22.896 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:23.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:23.155 
15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.155 [2024-11-06 15:22:50.649065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.155 15:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:24.532 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:24.532 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:24.532 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.532 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:24.532 15:22:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:26.436 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:26.436 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:26.436 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:26.436 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:26.436 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.436 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:26.436 15:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:26.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.696 15:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.696 [2024-11-06 15:22:54.172437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.696 15:22:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:28.074 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:28.074 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:28.074 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:28.074 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:28.074 15:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:29.979 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:29.979 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:29.979 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:29.979 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:29.979 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.979 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:29.979 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:30.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.238 [2024-11-06 15:22:57.753181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.238 15:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:31.615 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:31.615 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:31.616 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 
-- # local nvme_device_counter=1 nvme_devices=0 00:18:31.616 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:31.616 15:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:33.521 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:33.521 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:33.521 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:33.521 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:33.522 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:33.522 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:33.522 15:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:33.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1233 -- # return 0 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.781 [2024-11-06 15:23:01.232985] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.781 15:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:34.718 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:34.718 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:34.718 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.719 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:34.719 15:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
sleep 2 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:37.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.254 [2024-11-06 15:23:04.688324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.254 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 [2024-11-06 15:23:04.736407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 
15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:18:37.255 [2024-11-06 15:23:04.784593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 [2024-11-06 15:23:04.832737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.255 [2024-11-06 15:23:04.880907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.255 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:37.515 "tick_rate": 2100000000, 00:18:37.515 "poll_groups": [ 00:18:37.515 { 00:18:37.515 "name": "nvmf_tgt_poll_group_000", 00:18:37.515 "admin_qpairs": 2, 00:18:37.515 "io_qpairs": 168, 00:18:37.515 "current_admin_qpairs": 0, 00:18:37.515 "current_io_qpairs": 0, 00:18:37.515 "pending_bdev_io": 0, 00:18:37.515 "completed_nvme_io": 218, 00:18:37.515 "transports": [ 00:18:37.515 { 00:18:37.515 "trtype": "TCP" 00:18:37.515 } 00:18:37.515 ] 00:18:37.515 }, 00:18:37.515 { 00:18:37.515 "name": "nvmf_tgt_poll_group_001", 00:18:37.515 "admin_qpairs": 2, 00:18:37.515 "io_qpairs": 168, 00:18:37.515 "current_admin_qpairs": 0, 00:18:37.515 "current_io_qpairs": 0, 00:18:37.515 "pending_bdev_io": 0, 00:18:37.515 "completed_nvme_io": 218, 00:18:37.515 "transports": [ 00:18:37.515 { 00:18:37.515 "trtype": "TCP" 00:18:37.515 } 00:18:37.515 ] 00:18:37.515 }, 00:18:37.515 { 00:18:37.515 "name": "nvmf_tgt_poll_group_002", 00:18:37.515 "admin_qpairs": 1, 00:18:37.515 "io_qpairs": 168, 00:18:37.515 "current_admin_qpairs": 0, 00:18:37.515 "current_io_qpairs": 0, 00:18:37.515 "pending_bdev_io": 0, 
00:18:37.515 "completed_nvme_io": 267, 00:18:37.515 "transports": [ 00:18:37.515 { 00:18:37.515 "trtype": "TCP" 00:18:37.515 } 00:18:37.515 ] 00:18:37.515 }, 00:18:37.515 { 00:18:37.515 "name": "nvmf_tgt_poll_group_003", 00:18:37.515 "admin_qpairs": 2, 00:18:37.515 "io_qpairs": 168, 00:18:37.515 "current_admin_qpairs": 0, 00:18:37.515 "current_io_qpairs": 0, 00:18:37.515 "pending_bdev_io": 0, 00:18:37.515 "completed_nvme_io": 319, 00:18:37.515 "transports": [ 00:18:37.515 { 00:18:37.515 "trtype": "TCP" 00:18:37.515 } 00:18:37.515 ] 00:18:37.515 } 00:18:37.515 ] 00:18:37.515 }' 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:37.515 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:37.516 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:37.516 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:37.516 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:37.516 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:37.516 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:37.516 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:37.516 15:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:37.516 rmmod nvme_tcp 00:18:37.516 rmmod nvme_fabrics 00:18:37.516 rmmod nvme_keyring 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3823453 ']' 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3823453 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3823453 ']' 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3823453 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3823453 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3823453' 00:18:37.516 killing process with pid 3823453 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3823453 00:18:37.516 15:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 3823453 00:18:38.894 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:38.894 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:38.894 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:38.894 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:18:38.894 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:18:38.894 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:18:38.894 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:38.894 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:38.894 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:38.894 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.894 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.894 15:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:41.431 00:18:41.431 real 0m35.918s 00:18:41.431 user 1m49.490s 00:18:41.431 sys 0m6.724s 00:18:41.431 15:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.431 ************************************ 00:18:41.431 END TEST nvmf_rpc 00:18:41.431 ************************************ 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:41.431 ************************************ 00:18:41.431 START TEST nvmf_invalid 00:18:41.431 ************************************ 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:41.431 * Looking for test storage... 
00:18:41.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:41.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.431 --rc genhtml_branch_coverage=1 00:18:41.431 --rc 
genhtml_function_coverage=1 00:18:41.431 --rc genhtml_legend=1 00:18:41.431 --rc geninfo_all_blocks=1 00:18:41.431 --rc geninfo_unexecuted_blocks=1 00:18:41.431 00:18:41.431 ' 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:41.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.431 --rc genhtml_branch_coverage=1 00:18:41.431 --rc genhtml_function_coverage=1 00:18:41.431 --rc genhtml_legend=1 00:18:41.431 --rc geninfo_all_blocks=1 00:18:41.431 --rc geninfo_unexecuted_blocks=1 00:18:41.431 00:18:41.431 ' 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:41.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.431 --rc genhtml_branch_coverage=1 00:18:41.431 --rc genhtml_function_coverage=1 00:18:41.431 --rc genhtml_legend=1 00:18:41.431 --rc geninfo_all_blocks=1 00:18:41.431 --rc geninfo_unexecuted_blocks=1 00:18:41.431 00:18:41.431 ' 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:41.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.431 --rc genhtml_branch_coverage=1 00:18:41.431 --rc genhtml_function_coverage=1 00:18:41.431 --rc genhtml_legend=1 00:18:41.431 --rc geninfo_all_blocks=1 00:18:41.431 --rc geninfo_unexecuted_blocks=1 00:18:41.431 00:18:41.431 ' 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
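The `scripts/common.sh` trace above walks `cmp_versions` for `lt 1.15 2`: both version strings are split on `.-:` into arrays (`ver1`, `ver2`), then compared component by component until one side wins. An illustrative re-implementation of that loop — the function name `lt` matches the trace, but the three-component loop body is a simplification, not the exact script:

```shell
# Compare two dotted versions component-wise; return 0 if $1 < $2.
lt() {
  local -a a b
  IFS=. read -ra a <<< "$1"
  IFS=. read -ra b <<< "$2"
  local i x y
  for i in 0 1 2; do
    x=${a[i]:-0}; y=${b[i]:-0}   # missing components count as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal is not "less than"
}

lt 1.15 2 && echo "lcov older than 2" || echo "lcov 2 or newer"
```

With lcov 1.15 the first component already decides (1 < 2), which is why the trace exits `cmp_versions` at `scripts/common.sh@368` with return 0 and enables the legacy `--rc lcov_*` options.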
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:41.431 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.432 15:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:41.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:41.432 15:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:41.432 15:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:48.004 15:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.004 15:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:48.004 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:48.004 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.004 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:48.005 Found net devices under 0000:86:00.0: cvl_0_0 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:48.005 Found net devices under 0000:86:00.1: cvl_0_1 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.005 15:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.005 15:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:48.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:18:48.005 00:18:48.005 --- 10.0.0.2 ping statistics --- 00:18:48.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.005 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:18:48.005 00:18:48.005 --- 10.0.0.1 ping statistics --- 00:18:48.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.005 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:48.005 15:23:14 
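The `nvmf_tcp_init` sequence traced above (`nvmf/common.sh@271-291`) builds a two-interface loopback-free test topology: the target NIC `cvl_0_0` is moved into the `cvl_0_0_ns_spdk` namespace with 10.0.0.2/24, the initiator NIC `cvl_0_1` stays in the host with 10.0.0.1/24, an iptables ACCEPT rule opens port 4420, and both directions are ping-verified. A dry-run sketch of the same steps — commands are echoed through a wrapper rather than executed, since `ip netns` and `iptables` require root:

```shell
# Dry-run of the SPDK tcp test topology; swap run() for direct execution as root.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk                                  # target namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target NIC in
run ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator, host side
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                                # nvmf listener port
run ping -c 1 10.0.0.2                                            # host -> namespace
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # namespace -> host
```

Tagging the rule with an `SPDK_NVMF` comment is what lets the later `iptr` cleanup (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) strip only the test's rules on teardown.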
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3831697 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3831697 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3831697 ']' 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:48.005 15:23:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:48.005 [2024-11-06 15:23:14.846653] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:18:48.005 [2024-11-06 15:23:14.846737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.006 [2024-11-06 15:23:14.977045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:48.006 [2024-11-06 15:23:15.079533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.006 [2024-11-06 15:23:15.079580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.006 [2024-11-06 15:23:15.079590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.006 [2024-11-06 15:23:15.079603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.006 [2024-11-06 15:23:15.079611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:48.006 [2024-11-06 15:23:15.081993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.006 [2024-11-06 15:23:15.082073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.006 [2024-11-06 15:23:15.082148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.006 [2024-11-06 15:23:15.082170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.265 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:48.265 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:18:48.265 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.265 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:48.265 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:48.265 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.265 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:48.265 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6672 00:18:48.265 [2024-11-06 15:23:15.860790] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:48.265 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:48.265 { 00:18:48.265 "nqn": "nqn.2016-06.io.spdk:cnode6672", 00:18:48.265 "tgt_name": "foobar", 00:18:48.265 "method": "nvmf_create_subsystem", 00:18:48.265 "req_id": 1 00:18:48.265 } 00:18:48.265 Got JSON-RPC error 
response 00:18:48.265 response: 00:18:48.265 { 00:18:48.265 "code": -32603, 00:18:48.265 "message": "Unable to find target foobar" 00:18:48.265 }' 00:18:48.265 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:48.265 { 00:18:48.265 "nqn": "nqn.2016-06.io.spdk:cnode6672", 00:18:48.265 "tgt_name": "foobar", 00:18:48.265 "method": "nvmf_create_subsystem", 00:18:48.265 "req_id": 1 00:18:48.265 } 00:18:48.265 Got JSON-RPC error response 00:18:48.265 response: 00:18:48.265 { 00:18:48.265 "code": -32603, 00:18:48.265 "message": "Unable to find target foobar" 00:18:48.265 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:48.265 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:48.265 15:23:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24243 00:18:48.524 [2024-11-06 15:23:16.073562] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24243: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:48.524 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:48.524 { 00:18:48.524 "nqn": "nqn.2016-06.io.spdk:cnode24243", 00:18:48.524 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:48.524 "method": "nvmf_create_subsystem", 00:18:48.524 "req_id": 1 00:18:48.524 } 00:18:48.524 Got JSON-RPC error response 00:18:48.524 response: 00:18:48.524 { 00:18:48.524 "code": -32602, 00:18:48.524 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:48.524 }' 00:18:48.524 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:48.524 { 00:18:48.524 "nqn": "nqn.2016-06.io.spdk:cnode24243", 00:18:48.524 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:48.524 "method": "nvmf_create_subsystem", 00:18:48.524 
"req_id": 1 00:18:48.524 } 00:18:48.524 Got JSON-RPC error response 00:18:48.524 response: 00:18:48.524 { 00:18:48.524 "code": -32602, 00:18:48.524 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:48.524 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:48.524 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:48.524 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22549 00:18:48.784 [2024-11-06 15:23:16.282254] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22549: invalid model number 'SPDK_Controller' 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:48.784 { 00:18:48.784 "nqn": "nqn.2016-06.io.spdk:cnode22549", 00:18:48.784 "model_number": "SPDK_Controller\u001f", 00:18:48.784 "method": "nvmf_create_subsystem", 00:18:48.784 "req_id": 1 00:18:48.784 } 00:18:48.784 Got JSON-RPC error response 00:18:48.784 response: 00:18:48.784 { 00:18:48.784 "code": -32602, 00:18:48.784 "message": "Invalid MN SPDK_Controller\u001f" 00:18:48.784 }' 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:48.784 { 00:18:48.784 "nqn": "nqn.2016-06.io.spdk:cnode22549", 00:18:48.784 "model_number": "SPDK_Controller\u001f", 00:18:48.784 "method": "nvmf_create_subsystem", 00:18:48.784 "req_id": 1 00:18:48.784 } 00:18:48.784 Got JSON-RPC error response 00:18:48.784 response: 00:18:48.784 { 00:18:48.784 "code": -32602, 00:18:48.784 "message": "Invalid MN SPDK_Controller\u001f" 00:18:48.784 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.784 15:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:48.784 15:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:48.784 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:48.785 15:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:18:48.785 15:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:48.785 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.044 15:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.044 15:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'fzuCsloJXwL`"k0/E,MEz' 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'fzuCsloJXwL`"k0/E,MEz' nqn.2016-06.io.spdk:cnode24336 00:18:49.044 [2024-11-06 15:23:16.619434] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24336: invalid serial number 'fzuCsloJXwL`"k0/E,MEz' 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:49.044 { 00:18:49.044 "nqn": "nqn.2016-06.io.spdk:cnode24336", 00:18:49.044 "serial_number": "fzuCsloJXwL`\"k0/E,MEz", 00:18:49.044 "method": "nvmf_create_subsystem", 00:18:49.044 "req_id": 1 00:18:49.044 } 00:18:49.044 Got JSON-RPC error response 00:18:49.044 response: 00:18:49.044 { 00:18:49.044 "code": -32602, 00:18:49.044 "message": "Invalid SN fzuCsloJXwL`\"k0/E,MEz" 00:18:49.044 }' 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:49.044 { 00:18:49.044 "nqn": "nqn.2016-06.io.spdk:cnode24336", 00:18:49.044 "serial_number": "fzuCsloJXwL`\"k0/E,MEz", 00:18:49.044 "method": "nvmf_create_subsystem", 00:18:49.044 "req_id": 1 00:18:49.044 } 00:18:49.044 Got JSON-RPC error response 00:18:49.044 response: 00:18:49.044 { 00:18:49.044 "code": -32602, 00:18:49.044 "message": "Invalid SN fzuCsloJXwL`\"k0/E,MEz" 00:18:49.044 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:49.044 15:23:16 
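The long `printf %x` / `echo -e '\xNN'` trace above is `gen_random_s` building a random serial number one character at a time from its `chars` array (ASCII 32 through 127 in the log). A compact sketch of the same idea is below; it is not the harness's implementation, and for the sake of an unambiguous length check it restricts the range to the non-space printable characters 33 through 126.

```shell
#!/usr/bin/env bash
# Sketch of the gen_random_s helper traced above: build a <length>-character
# string of random printable characters, one code point at a time.
gen_random_s() {
    local length=$1 ll code ch string=
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))            # '!' (33) .. '~' (126)
        printf -v ch "\\x$(printf '%x' "$code")" # render the code point
        string+=$ch
    done
    printf '%s\n' "$string"
}

gen_random_s 21
```

The test then feeds the generated string (e.g. the 21-character ``fzuCsloJXwL`"k0/E,MEz`` above) back into `nvmf_create_subsystem -s`, expecting the `Invalid SN` rejection; the 41-character run that follows does the same for the model number.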
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:49.044 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.045 15:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:18:49.045 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:18:49.305 15:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:49.305 15:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43'
00:18:49.305 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f'
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]]
00:18:49.306 15:23:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '=96d@^xhMlB1z1 /dev/null'
00:18:52.757 15:23:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:54.663 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:18:54.663
00:18:54.663 real 0m13.584s
00:18:54.663 user 0m23.794s
00:18:54.663 sys 0m5.541s
00:18:54.663 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable
00:18:54.663 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:18:54.663 ************************************
00:18:54.663 END TEST nvmf_invalid
00:18:54.663 ************************************
00:18:54.663 15:23:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:18:54.663 15:23:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:18:54.663 15:23:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:18:54.663 15:23:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:54.663 ************************************
00:18:54.663 START TEST nvmf_connect_stress
00:18:54.663 ************************************
00:18:54.663 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:18:54.923 * Looking for test storage...
00:18:54.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-:
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-:
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<'
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:54.923 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:18:54.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:54.924 --rc genhtml_branch_coverage=1
00:18:54.924 --rc genhtml_function_coverage=1
00:18:54.924 --rc genhtml_legend=1
00:18:54.924 --rc geninfo_all_blocks=1
00:18:54.924 --rc geninfo_unexecuted_blocks=1
00:18:54.924
00:18:54.924 '
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:18:54.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:54.924 --rc genhtml_branch_coverage=1
00:18:54.924 --rc genhtml_function_coverage=1
00:18:54.924 --rc genhtml_legend=1
00:18:54.924 --rc geninfo_all_blocks=1
00:18:54.924 --rc geninfo_unexecuted_blocks=1
00:18:54.924
00:18:54.924 '
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:18:54.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:54.924 --rc genhtml_branch_coverage=1
00:18:54.924 --rc genhtml_function_coverage=1
00:18:54.924 --rc genhtml_legend=1
00:18:54.924 --rc geninfo_all_blocks=1
00:18:54.924 --rc geninfo_unexecuted_blocks=1
00:18:54.924
00:18:54.924 '
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:18:54.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:54.924 --rc genhtml_branch_coverage=1
00:18:54.924 --rc genhtml_function_coverage=1
00:18:54.924 --rc genhtml_legend=1
00:18:54.924 --rc geninfo_all_blocks=1
00:18:54.924 --rc geninfo_unexecuted_blocks=1
00:18:54.924
00:18:54.924 '
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:18:54.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:18:54.924 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:54.925 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:54.925 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:54.925 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:18:54.925 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:18:54.925 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:18:54.925 15:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=()
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=()
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=()
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=()
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:01.496 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:19:01.497 Found 0000:86:00.0 (0x8086 - 0x159b)
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:19:01.497 Found 0000:86:00.1 (0x8086 - 0x159b)
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:01.497 15:23:28
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:01.497 Found net devices under 0000:86:00.0: cvl_0_0 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:01.497 Found net devices under 0000:86:00.1: cvl_0_1 
00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:01.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:01.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:19:01.497 00:19:01.497 --- 10.0.0.2 ping statistics --- 00:19:01.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.497 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:01.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:19:01.497 00:19:01.497 --- 10.0.0.1 ping statistics --- 00:19:01.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.497 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:01.497 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:19:01.498 15:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3836106 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3836106 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3836106 ']' 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:01.498 15:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.498 [2024-11-06 15:23:28.524175] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:19:01.498 [2024-11-06 15:23:28.524270] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.498 [2024-11-06 15:23:28.652644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:01.498 [2024-11-06 15:23:28.761557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.498 [2024-11-06 15:23:28.761601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.498 [2024-11-06 15:23:28.761611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.498 [2024-11-06 15:23:28.761620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.498 [2024-11-06 15:23:28.761628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.498 [2024-11-06 15:23:28.764074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.498 [2024-11-06 15:23:28.764142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.498 [2024-11-06 15:23:28.764163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.779 [2024-11-06 15:23:29.373884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.779 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.779 [2024-11-06 15:23:29.395729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:02.068 NULL1 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3836350 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.068 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:02.338 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.338 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:02.338 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:02.338 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.338 15:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:02.630 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.630 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:02.630 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:02.630 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.630 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:02.901 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.901 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:02.901 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:02.901 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.901 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:03.469 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.469 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:03.469 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:03.469 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.469 15:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:03.729 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.729 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:03.729 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:03.729 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.729 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:03.988 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.988 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:03.988 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:03.988 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.988 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:04.247 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.247 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:04.247 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:04.247 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.247 15:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:04.506 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.506 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:04.506 15:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:04.506 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.506 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:05.073 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.073 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:05.073 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:05.073 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.073 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:05.332 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.332 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:05.332 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:05.332 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.332 15:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:05.591 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.591 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:05.591 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:05.591 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.591 
15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:05.849 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.849 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:05.849 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:05.849 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.849 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:06.416 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.416 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:06.416 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:06.416 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.416 15:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:06.675 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.675 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:06.675 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:06.675 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.675 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:06.933 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.933 
15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:06.933 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:06.933 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.933 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.191 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.191 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:07.191 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:07.191 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.191 15:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.450 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.450 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:07.450 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:07.450 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.450 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:08.017 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.017 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:08.017 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:19:08.017 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.017 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:08.275 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.275 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:08.275 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:08.275 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.275 15:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:08.533 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.533 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:08.533 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:08.533 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.533 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:08.792 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.792 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:08.792 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:08.792 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.792 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:19:09.359 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.359 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:09.359 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:09.359 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.359 15:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:09.617 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.617 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:09.617 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:09.617 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.617 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:09.876 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.876 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:09.876 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:09.876 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.876 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:10.135 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.135 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3836350 00:19:10.135 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:10.135 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.135 15:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:10.703 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.703 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:10.703 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:10.703 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.703 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:10.961 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.961 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:10.961 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:10.961 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.961 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:11.220 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.220 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:11.220 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:11.220 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:11.220 15:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:11.478 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.478 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:11.478 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:11.478 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.478 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:11.737 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.737 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:11.737 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:11.737 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.737 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:11.996 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3836350 00:19:12.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3836350) - No such process 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3836350 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:12.253 rmmod nvme_tcp 00:19:12.253 rmmod nvme_fabrics 00:19:12.253 rmmod nvme_keyring 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3836106 ']' 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3836106 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3836106 ']' 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3836106 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@957 -- # uname 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3836106 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3836106' 00:19:12.253 killing process with pid 3836106 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3836106 00:19:12.253 15:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3836106 00:19:13.631 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:13.631 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:13.631 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:13.631 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:19:13.631 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:13.631 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:19:13.631 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:19:13.631 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:13.631 15:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:13.631 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.631 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.631 15:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.537 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:15.537 00:19:15.537 real 0m20.735s 00:19:15.537 user 0m43.784s 00:19:15.537 sys 0m8.471s 00:19:15.537 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:15.537 15:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:15.537 ************************************ 00:19:15.537 END TEST nvmf_connect_stress 00:19:15.537 ************************************ 00:19:15.537 15:23:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:15.537 15:23:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:15.537 15:23:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:15.537 15:23:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:15.537 ************************************ 00:19:15.537 START TEST nvmf_fused_ordering 00:19:15.537 ************************************ 00:19:15.537 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:15.537 * Looking for test storage... 
00:19:15.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:15.537 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:15.537 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:19:15.537 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:19:15.797 15:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.797 15:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.797 --rc genhtml_branch_coverage=1 00:19:15.797 --rc genhtml_function_coverage=1 00:19:15.797 --rc genhtml_legend=1 00:19:15.797 --rc geninfo_all_blocks=1 00:19:15.797 --rc geninfo_unexecuted_blocks=1 00:19:15.797 00:19:15.797 ' 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.797 --rc genhtml_branch_coverage=1 00:19:15.797 --rc genhtml_function_coverage=1 00:19:15.797 --rc genhtml_legend=1 00:19:15.797 --rc geninfo_all_blocks=1 00:19:15.797 --rc geninfo_unexecuted_blocks=1 00:19:15.797 00:19:15.797 ' 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.797 --rc genhtml_branch_coverage=1 00:19:15.797 --rc genhtml_function_coverage=1 00:19:15.797 --rc genhtml_legend=1 00:19:15.797 --rc geninfo_all_blocks=1 00:19:15.797 --rc geninfo_unexecuted_blocks=1 00:19:15.797 00:19:15.797 ' 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:15.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.797 --rc genhtml_branch_coverage=1 00:19:15.797 --rc genhtml_function_coverage=1 00:19:15.797 --rc genhtml_legend=1 00:19:15.797 --rc geninfo_all_blocks=1 00:19:15.797 --rc geninfo_unexecuted_blocks=1 00:19:15.797 00:19:15.797 ' 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.797 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:15.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:19:15.798 15:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=()
00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=()
00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810
00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=()
00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722
00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=()
00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx
00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:22.369 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:19:22.370 Found 0000:86:00.0 (0x8086 - 0x159b)
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:19:22.370 Found 0000:86:00.1 (0x8086 - 0x159b)
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:19:22.370 Found net devices under 0000:86:00.0: cvl_0_0
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:19:22.370 Found net devices under 0000:86:00.1: cvl_0_1
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:19:22.370 15:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:19:22.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:22.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms
00:19:22.370 
00:19:22.370 --- 10.0.0.2 ping statistics ---
00:19:22.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:22.370 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:22.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:22.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms
00:19:22.370 
00:19:22.370 --- 10.0.0.1 ping statistics ---
00:19:22.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:22.370 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:22.370 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3841756
00:19:22.371 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3841756
00:19:22.371 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:19:22.371 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3841756 ']'
00:19:22.371 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:22.371 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100
00:19:22.371 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:22.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:22.371 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable
00:19:22.371 15:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:22.371 [2024-11-06 15:23:49.322408] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:19:22.371 [2024-11-06 15:23:49.322503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:22.371 [2024-11-06 15:23:49.434800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:22.371 [2024-11-06 15:23:49.542493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:22.371 [2024-11-06 15:23:49.542533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:22.371 [2024-11-06 15:23:49.542543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:22.371 [2024-11-06 15:23:49.542568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:22.371 [2024-11-06 15:23:49.542581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
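[Editor's sketch, not part of the captured log.] The namespace plumbing traced at nvmf/common.sh@271-@291 above (create a netns, move the target NIC into it, address both sides, open TCP port 4420, verify with ping) can be summarized as the script below. Interface names, the namespace name, and addresses are copied from the log; `run()` only echoes each command so the sketch can be read and checked without root privileges (swap in `"$@"` to actually apply it).

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace topology the test harness builds:
# the SPDK target NIC lives in a private netns, the initiator NIC
# stays in the default netns, and both get an address on 10.0.0.0/24.
set -euo pipefail

TARGET_IF=cvl_0_0          # NIC handed to the SPDK nvmf target (from the log)
INITIATOR_IF=cvl_0_1       # NIC left in the default namespace
NS=cvl_0_0_ns_spdk         # namespace the nvmf target runs in

run() { echo "+ $*"; }     # dry-run: echo instead of executing (needs root otherwise)

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port on the initiator-facing interface.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Reachability check in both directions, as the harness does.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The namespace boundary is what lets the target and initiator share one host while still exercising a real TCP path between two interfaces.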
00:19:22.371 [2024-11-06 15:23:49.543881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:22.630 [2024-11-06 15:23:50.183971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:22.630 [2024-11-06 15:23:50.204163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:22.630 NULL1
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:22.630 15:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:19:22.889 [2024-11-06 15:23:50.279714] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:19:22.889 [2024-11-06 15:23:50.279771] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841942 ]
00:19:23.148 Attached to nqn.2016-06.io.spdk:cnode1
00:19:23.148 Namespace ID: 1 size: 1GB
00:19:23.148 fused_ordering(0)
00:19:23.148 fused_ordering(1)
00:19:23.148 fused_ordering(2)
00:19:23.148 fused_ordering(3)
00:19:23.148 fused_ordering(4)
00:19:23.148 fused_ordering(5)
00:19:23.148 fused_ordering(6)
00:19:23.148 fused_ordering(7)
00:19:23.148 fused_ordering(8)
00:19:23.148 fused_ordering(9)
00:19:23.148 fused_ordering(10)
00:19:23.148 fused_ordering(11)
00:19:23.148 fused_ordering(12)
00:19:23.148 fused_ordering(13)
00:19:23.148 fused_ordering(14)
00:19:23.148 fused_ordering(15)
00:19:23.148 fused_ordering(16)
00:19:23.148 fused_ordering(17)
00:19:23.148 fused_ordering(18)
00:19:23.148 fused_ordering(19)
00:19:23.148 fused_ordering(20)
00:19:23.148 fused_ordering(21)
00:19:23.148 fused_ordering(22)
00:19:23.148 fused_ordering(23)
00:19:23.148 fused_ordering(24)
00:19:23.148 fused_ordering(25)
00:19:23.148 fused_ordering(26)
00:19:23.148 fused_ordering(27)
00:19:23.148 
fused_ordering(28) 00:19:23.148 fused_ordering(29) 00:19:23.148 fused_ordering(30) 00:19:23.148 fused_ordering(31) 00:19:23.148 fused_ordering(32) 00:19:23.148 fused_ordering(33) 00:19:23.148 fused_ordering(34) 00:19:23.148 fused_ordering(35) 00:19:23.148 fused_ordering(36) 00:19:23.148 fused_ordering(37) 00:19:23.148 fused_ordering(38) 00:19:23.148 fused_ordering(39) 00:19:23.148 fused_ordering(40) 00:19:23.148 fused_ordering(41) 00:19:23.148 fused_ordering(42) 00:19:23.148 fused_ordering(43) 00:19:23.148 fused_ordering(44) 00:19:23.148 fused_ordering(45) 00:19:23.148 fused_ordering(46) 00:19:23.148 fused_ordering(47) 00:19:23.148 fused_ordering(48) 00:19:23.148 fused_ordering(49) 00:19:23.148 fused_ordering(50) 00:19:23.148 fused_ordering(51) 00:19:23.148 fused_ordering(52) 00:19:23.148 fused_ordering(53) 00:19:23.148 fused_ordering(54) 00:19:23.148 fused_ordering(55) 00:19:23.148 fused_ordering(56) 00:19:23.148 fused_ordering(57) 00:19:23.148 fused_ordering(58) 00:19:23.148 fused_ordering(59) 00:19:23.148 fused_ordering(60) 00:19:23.148 fused_ordering(61) 00:19:23.148 fused_ordering(62) 00:19:23.148 fused_ordering(63) 00:19:23.148 fused_ordering(64) 00:19:23.148 fused_ordering(65) 00:19:23.148 fused_ordering(66) 00:19:23.148 fused_ordering(67) 00:19:23.148 fused_ordering(68) 00:19:23.148 fused_ordering(69) 00:19:23.148 fused_ordering(70) 00:19:23.148 fused_ordering(71) 00:19:23.148 fused_ordering(72) 00:19:23.148 fused_ordering(73) 00:19:23.148 fused_ordering(74) 00:19:23.148 fused_ordering(75) 00:19:23.148 fused_ordering(76) 00:19:23.148 fused_ordering(77) 00:19:23.148 fused_ordering(78) 00:19:23.148 fused_ordering(79) 00:19:23.148 fused_ordering(80) 00:19:23.148 fused_ordering(81) 00:19:23.148 fused_ordering(82) 00:19:23.148 fused_ordering(83) 00:19:23.148 fused_ordering(84) 00:19:23.148 fused_ordering(85) 00:19:23.148 fused_ordering(86) 00:19:23.148 fused_ordering(87) 00:19:23.148 fused_ordering(88) 00:19:23.148 fused_ordering(89) 00:19:23.148 
fused_ordering(90) 00:19:23.148 fused_ordering(91) 00:19:23.148 fused_ordering(92) 00:19:23.148 fused_ordering(93) 00:19:23.148 fused_ordering(94) 00:19:23.148 fused_ordering(95) 00:19:23.148 fused_ordering(96) 00:19:23.148 fused_ordering(97) 00:19:23.148 fused_ordering(98) 00:19:23.148 fused_ordering(99) 00:19:23.148 fused_ordering(100) 00:19:23.148 fused_ordering(101) 00:19:23.148 fused_ordering(102) 00:19:23.148 fused_ordering(103) 00:19:23.148 fused_ordering(104) 00:19:23.148 fused_ordering(105) 00:19:23.148 fused_ordering(106) 00:19:23.148 fused_ordering(107) 00:19:23.148 fused_ordering(108) 00:19:23.148 fused_ordering(109) 00:19:23.148 fused_ordering(110) 00:19:23.148 fused_ordering(111) 00:19:23.148 fused_ordering(112) 00:19:23.148 fused_ordering(113) 00:19:23.148 fused_ordering(114) 00:19:23.148 fused_ordering(115) 00:19:23.148 fused_ordering(116) 00:19:23.148 fused_ordering(117) 00:19:23.148 fused_ordering(118) 00:19:23.148 fused_ordering(119) 00:19:23.148 fused_ordering(120) 00:19:23.148 fused_ordering(121) 00:19:23.148 fused_ordering(122) 00:19:23.148 fused_ordering(123) 00:19:23.148 fused_ordering(124) 00:19:23.148 fused_ordering(125) 00:19:23.148 fused_ordering(126) 00:19:23.148 fused_ordering(127) 00:19:23.148 fused_ordering(128) 00:19:23.148 fused_ordering(129) 00:19:23.148 fused_ordering(130) 00:19:23.148 fused_ordering(131) 00:19:23.148 fused_ordering(132) 00:19:23.148 fused_ordering(133) 00:19:23.148 fused_ordering(134) 00:19:23.148 fused_ordering(135) 00:19:23.148 fused_ordering(136) 00:19:23.148 fused_ordering(137) 00:19:23.148 fused_ordering(138) 00:19:23.148 fused_ordering(139) 00:19:23.148 fused_ordering(140) 00:19:23.148 fused_ordering(141) 00:19:23.148 fused_ordering(142) 00:19:23.148 fused_ordering(143) 00:19:23.148 fused_ordering(144) 00:19:23.148 fused_ordering(145) 00:19:23.148 fused_ordering(146) 00:19:23.148 fused_ordering(147) 00:19:23.148 fused_ordering(148) 00:19:23.148 fused_ordering(149) 00:19:23.148 fused_ordering(150) 
00:19:23.148 fused_ordering(151) 00:19:23.148 fused_ordering(152) 00:19:23.148 fused_ordering(153) 00:19:23.148 fused_ordering(154) 00:19:23.148 fused_ordering(155) 00:19:23.148 fused_ordering(156) 00:19:23.148 fused_ordering(157) 00:19:23.148 fused_ordering(158) 00:19:23.148 fused_ordering(159) 00:19:23.148 fused_ordering(160) 00:19:23.148 fused_ordering(161) 00:19:23.148 fused_ordering(162) 00:19:23.148 fused_ordering(163) 00:19:23.148 fused_ordering(164) 00:19:23.148 fused_ordering(165) 00:19:23.148 fused_ordering(166) 00:19:23.148 fused_ordering(167) 00:19:23.148 fused_ordering(168) 00:19:23.148 fused_ordering(169) 00:19:23.148 fused_ordering(170) 00:19:23.148 fused_ordering(171) 00:19:23.148 fused_ordering(172) 00:19:23.148 fused_ordering(173) 00:19:23.148 fused_ordering(174) 00:19:23.148 fused_ordering(175) 00:19:23.148 fused_ordering(176) 00:19:23.148 fused_ordering(177) 00:19:23.148 fused_ordering(178) 00:19:23.148 fused_ordering(179) 00:19:23.148 fused_ordering(180) 00:19:23.148 fused_ordering(181) 00:19:23.148 fused_ordering(182) 00:19:23.148 fused_ordering(183) 00:19:23.148 fused_ordering(184) 00:19:23.148 fused_ordering(185) 00:19:23.148 fused_ordering(186) 00:19:23.148 fused_ordering(187) 00:19:23.148 fused_ordering(188) 00:19:23.148 fused_ordering(189) 00:19:23.148 fused_ordering(190) 00:19:23.148 fused_ordering(191) 00:19:23.148 fused_ordering(192) 00:19:23.148 fused_ordering(193) 00:19:23.148 fused_ordering(194) 00:19:23.148 fused_ordering(195) 00:19:23.148 fused_ordering(196) 00:19:23.148 fused_ordering(197) 00:19:23.148 fused_ordering(198) 00:19:23.148 fused_ordering(199) 00:19:23.148 fused_ordering(200) 00:19:23.148 fused_ordering(201) 00:19:23.148 fused_ordering(202) 00:19:23.148 fused_ordering(203) 00:19:23.148 fused_ordering(204) 00:19:23.148 fused_ordering(205) 00:19:23.408 fused_ordering(206) 00:19:23.408 fused_ordering(207) 00:19:23.408 fused_ordering(208) 00:19:23.408 fused_ordering(209) 00:19:23.408 fused_ordering(210) 00:19:23.408 
fused_ordering(211) 00:19:23.408 fused_ordering(212) 00:19:23.408 fused_ordering(213) 00:19:23.408 fused_ordering(214) 00:19:23.408 fused_ordering(215) 00:19:23.408 fused_ordering(216) 00:19:23.408 fused_ordering(217) 00:19:23.408 fused_ordering(218) 00:19:23.408 fused_ordering(219) 00:19:23.408 fused_ordering(220) 00:19:23.408 fused_ordering(221) 00:19:23.408 fused_ordering(222) 00:19:23.408 fused_ordering(223) 00:19:23.408 fused_ordering(224) 00:19:23.408 fused_ordering(225) 00:19:23.408 fused_ordering(226) 00:19:23.408 fused_ordering(227) 00:19:23.408 fused_ordering(228) 00:19:23.408 fused_ordering(229) 00:19:23.408 fused_ordering(230) 00:19:23.408 fused_ordering(231) 00:19:23.408 fused_ordering(232) 00:19:23.408 fused_ordering(233) 00:19:23.408 fused_ordering(234) 00:19:23.408 fused_ordering(235) 00:19:23.408 fused_ordering(236) 00:19:23.408 fused_ordering(237) 00:19:23.408 fused_ordering(238) 00:19:23.408 fused_ordering(239) 00:19:23.408 fused_ordering(240) 00:19:23.408 fused_ordering(241) 00:19:23.408 fused_ordering(242) 00:19:23.408 fused_ordering(243) 00:19:23.408 fused_ordering(244) 00:19:23.408 fused_ordering(245) 00:19:23.408 fused_ordering(246) 00:19:23.408 fused_ordering(247) 00:19:23.408 fused_ordering(248) 00:19:23.408 fused_ordering(249) 00:19:23.408 fused_ordering(250) 00:19:23.408 fused_ordering(251) 00:19:23.408 fused_ordering(252) 00:19:23.408 fused_ordering(253) 00:19:23.408 fused_ordering(254) 00:19:23.408 fused_ordering(255) 00:19:23.408 fused_ordering(256) 00:19:23.408 fused_ordering(257) 00:19:23.408 fused_ordering(258) 00:19:23.408 fused_ordering(259) 00:19:23.408 fused_ordering(260) 00:19:23.408 fused_ordering(261) 00:19:23.408 fused_ordering(262) 00:19:23.408 fused_ordering(263) 00:19:23.408 fused_ordering(264) 00:19:23.408 fused_ordering(265) 00:19:23.408 fused_ordering(266) 00:19:23.408 fused_ordering(267) 00:19:23.408 fused_ordering(268) 00:19:23.408 fused_ordering(269) 00:19:23.408 fused_ordering(270) 00:19:23.408 fused_ordering(271) 
00:19:23.408 fused_ordering(272) 00:19:23.408 fused_ordering(273) 00:19:23.408 fused_ordering(274) 00:19:23.408 fused_ordering(275) 00:19:23.408 fused_ordering(276) 00:19:23.408 fused_ordering(277) 00:19:23.408 fused_ordering(278) 00:19:23.408 fused_ordering(279) 00:19:23.408 fused_ordering(280) 00:19:23.408 fused_ordering(281) 00:19:23.408 fused_ordering(282) 00:19:23.408 fused_ordering(283) 00:19:23.408 fused_ordering(284) 00:19:23.408 fused_ordering(285) 00:19:23.408 fused_ordering(286) 00:19:23.408 fused_ordering(287) 00:19:23.408 fused_ordering(288) 00:19:23.408 fused_ordering(289) 00:19:23.408 fused_ordering(290) 00:19:23.408 fused_ordering(291) 00:19:23.408 fused_ordering(292) 00:19:23.408 fused_ordering(293) 00:19:23.408 fused_ordering(294) 00:19:23.408 fused_ordering(295) 00:19:23.408 fused_ordering(296) 00:19:23.408 fused_ordering(297) 00:19:23.408 fused_ordering(298) 00:19:23.408 fused_ordering(299) 00:19:23.408 fused_ordering(300) 00:19:23.408 fused_ordering(301) 00:19:23.408 fused_ordering(302) 00:19:23.408 fused_ordering(303) 00:19:23.408 fused_ordering(304) 00:19:23.408 fused_ordering(305) 00:19:23.408 fused_ordering(306) 00:19:23.408 fused_ordering(307) 00:19:23.408 fused_ordering(308) 00:19:23.408 fused_ordering(309) 00:19:23.408 fused_ordering(310) 00:19:23.408 fused_ordering(311) 00:19:23.408 fused_ordering(312) 00:19:23.408 fused_ordering(313) 00:19:23.408 fused_ordering(314) 00:19:23.408 fused_ordering(315) 00:19:23.408 fused_ordering(316) 00:19:23.408 fused_ordering(317) 00:19:23.408 fused_ordering(318) 00:19:23.408 fused_ordering(319) 00:19:23.408 fused_ordering(320) 00:19:23.408 fused_ordering(321) 00:19:23.408 fused_ordering(322) 00:19:23.408 fused_ordering(323) 00:19:23.408 fused_ordering(324) 00:19:23.408 fused_ordering(325) 00:19:23.408 fused_ordering(326) 00:19:23.408 fused_ordering(327) 00:19:23.408 fused_ordering(328) 00:19:23.408 fused_ordering(329) 00:19:23.408 fused_ordering(330) 00:19:23.408 fused_ordering(331) 00:19:23.408 
fused_ordering(332) 00:19:23.408 fused_ordering(333) 00:19:23.408 fused_ordering(334) 00:19:23.408 [fused_ordering counter increments by one per entry from 335 through 996; timestamps advance from 00:19:23.408 to 00:19:23.976 at entry 411, to 00:19:24.236 at entry 616, and to 00:19:24.805 at entry 820 — repeated entries condensed] 00:19:24.805 fused_ordering(997)
00:19:24.805 fused_ordering(998) 00:19:24.805 fused_ordering(999) 00:19:24.805 fused_ordering(1000) 00:19:24.805 fused_ordering(1001) 00:19:24.805 fused_ordering(1002) 00:19:24.805 fused_ordering(1003) 00:19:24.805 fused_ordering(1004) 00:19:24.805 fused_ordering(1005) 00:19:24.805 fused_ordering(1006) 00:19:24.805 fused_ordering(1007) 00:19:24.805 fused_ordering(1008) 00:19:24.805 fused_ordering(1009) 00:19:24.805 fused_ordering(1010) 00:19:24.805 fused_ordering(1011) 00:19:24.805 fused_ordering(1012) 00:19:24.805 fused_ordering(1013) 00:19:24.805 fused_ordering(1014) 00:19:24.805 fused_ordering(1015) 00:19:24.805 fused_ordering(1016) 00:19:24.805 fused_ordering(1017) 00:19:24.805 fused_ordering(1018) 00:19:24.805 fused_ordering(1019) 00:19:24.805 fused_ordering(1020) 00:19:24.805 fused_ordering(1021) 00:19:24.805 fused_ordering(1022) 00:19:24.805 fused_ordering(1023) 00:19:24.805 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:24.805 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:24.805 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:24.805 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:19:24.805 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:24.805 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:19:24.805 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.805 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:24.805 rmmod nvme_tcp 00:19:24.805 rmmod nvme_fabrics 00:19:24.806 rmmod nvme_keyring 00:19:24.806 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:24.806 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:19:24.806 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:19:24.806 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3841756 ']' 00:19:24.806 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3841756 00:19:24.806 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3841756 ']' 00:19:24.806 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3841756 00:19:24.806 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:19:24.806 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:24.806 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3841756 00:19:25.065 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:25.065 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:25.065 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3841756' 00:19:25.065 killing process with pid 3841756 00:19:25.065 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3841756 00:19:25.065 15:23:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3841756 00:19:26.001 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:26.001 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:19:26.001 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:26.001 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:19:26.001 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:19:26.001 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:26.001 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:19:26.001 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:26.001 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:26.001 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.001 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.001 15:23:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.535 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:28.535 00:19:28.535 real 0m12.529s 00:19:28.535 user 0m7.113s 00:19:28.535 sys 0m6.014s 00:19:28.535 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:28.535 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:28.535 ************************************ 00:19:28.535 END TEST nvmf_fused_ordering 00:19:28.535 ************************************ 00:19:28.535 15:23:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:19:28.535 15:23:55 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:28.535 15:23:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:28.535 15:23:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:28.535 ************************************ 00:19:28.535 START TEST nvmf_ns_masking 00:19:28.535 ************************************ 00:19:28.535 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:19:28.535 * Looking for test storage... 00:19:28.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.535 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:28.535 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:19:28.535 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:28.535 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:28.535 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.536 15:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:28.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.536 --rc genhtml_branch_coverage=1 00:19:28.536 --rc genhtml_function_coverage=1 00:19:28.536 --rc genhtml_legend=1 00:19:28.536 --rc geninfo_all_blocks=1 00:19:28.536 --rc geninfo_unexecuted_blocks=1 00:19:28.536 00:19:28.536 ' 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:28.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.536 --rc genhtml_branch_coverage=1 00:19:28.536 --rc genhtml_function_coverage=1 00:19:28.536 --rc genhtml_legend=1 00:19:28.536 --rc geninfo_all_blocks=1 00:19:28.536 --rc geninfo_unexecuted_blocks=1 00:19:28.536 00:19:28.536 ' 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:28.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.536 --rc genhtml_branch_coverage=1 00:19:28.536 --rc genhtml_function_coverage=1 00:19:28.536 --rc genhtml_legend=1 00:19:28.536 --rc geninfo_all_blocks=1 00:19:28.536 --rc geninfo_unexecuted_blocks=1 00:19:28.536 00:19:28.536 ' 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:28.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.536 --rc genhtml_branch_coverage=1 00:19:28.536 --rc 
genhtml_function_coverage=1 00:19:28.536 --rc genhtml_legend=1 00:19:28.536 --rc geninfo_all_blocks=1 00:19:28.536 --rc geninfo_unexecuted_blocks=1 00:19:28.536 00:19:28.536 ' 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.536 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:28.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d04c5d5a-eb0d-48f3-b24e-23e05816ac52 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=09cc266f-1590-406c-ba7d-236c7dff8889 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5d5fe83e-1b1c-4b1b-82b5-294315effffc 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:19:28.537 15:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:35.100 15:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.100 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.101 15:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:35.101 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:35.101 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:19:35.101 Found net devices under 0000:86:00.0: cvl_0_0 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:35.101 Found net devices under 0000:86:00.1: cvl_0_1 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:35.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:19:35.101 00:19:35.101 --- 10.0.0.2 ping statistics --- 00:19:35.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.101 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:35.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:19:35.101 00:19:35.101 --- 10.0.0.1 ping statistics --- 00:19:35.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.101 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3846100 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3846100 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3846100 ']' 00:19:35.101 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.102 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:35.102 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.102 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:35.102 15:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:35.102 [2024-11-06 15:24:01.951966] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:19:35.102 [2024-11-06 15:24:01.952060] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.102 [2024-11-06 15:24:02.081266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.102 [2024-11-06 15:24:02.192449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.102 [2024-11-06 15:24:02.192491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:35.102 [2024-11-06 15:24:02.192501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.102 [2024-11-06 15:24:02.192527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.102 [2024-11-06 15:24:02.192535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.102 [2024-11-06 15:24:02.194095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.359 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:35.359 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:19:35.359 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:35.359 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.360 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:35.360 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.360 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:35.360 [2024-11-06 15:24:02.959346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.360 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:35.360 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:35.360 15:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:19:35.925 Malloc1 00:19:35.925 15:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:35.925 Malloc2 00:19:35.925 15:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:36.183 15:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:36.441 15:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.441 [2024-11-06 15:24:04.065383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.699 15:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:36.699 15:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5d5fe83e-1b1c-4b1b-82b5-294315effffc -a 10.0.0.2 -s 4420 -i 4 00:19:36.699 15:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:36.699 15:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:19:36.699 15:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:36.699 15:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:19:36.699 15:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:39.229 [ 0]:0x1 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:39.229 
15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af6966b512074cc1bf97c4a7509f9391 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af6966b512074cc1bf97c4a7509f9391 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:39.229 [ 0]:0x1 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af6966b512074cc1bf97c4a7509f9391 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af6966b512074cc1bf97c4a7509f9391 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:39.229 [ 1]:0x2 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b8139a1310f4d53ab98b1adbb5021ae 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b8139a1310f4d53ab98b1adbb5021ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:39.229 15:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:39.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:39.488 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:39.747 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:40.007 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:40.007 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5d5fe83e-1b1c-4b1b-82b5-294315effffc -a 10.0.0.2 -s 4420 -i 4 00:19:40.007 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:40.007 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:19:40.007 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:40.007 15:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:19:40.007 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:19:40.007 15:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:42.540 [ 0]:0x2 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b8139a1310f4d53ab98b1adbb5021ae 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b8139a1310f4d53ab98b1adbb5021ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:42.540 [ 0]:0x1 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:42.540 15:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:42.540 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af6966b512074cc1bf97c4a7509f9391 00:19:42.540 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af6966b512074cc1bf97c4a7509f9391 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:42.540 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:42.540 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:42.540 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:42.540 [ 1]:0x2 00:19:42.540 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:42.540 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:42.540 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b8139a1310f4d53ab98b1adbb5021ae 00:19:42.540 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b8139a1310f4d53ab98b1adbb5021ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:42.540 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:42.799 [ 0]:0x2 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b8139a1310f4d53ab98b1adbb5021ae 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b8139a1310f4d53ab98b1adbb5021ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:42.799 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:43.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:43.057 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:43.057 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:43.057 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5d5fe83e-1b1c-4b1b-82b5-294315effffc -a 10.0.0.2 -s 4420 -i 4 00:19:43.316 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:43.316 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:19:43.316 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:43.316 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:19:43.316 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:19:43.316 15:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:19:45.219 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:45.219 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:45.219 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:45.219 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:19:45.219 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:45.219 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:19:45.219 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:45.219 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:45.477 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:45.477 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:45.477 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:45.477 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:45.477 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:45.477 [ 0]:0x1 00:19:45.477 15:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:45.477 15:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:45.477 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af6966b512074cc1bf97c4a7509f9391 00:19:45.477 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af6966b512074cc1bf97c4a7509f9391 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:45.477 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:45.477 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:45.477 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:45.477 [ 1]:0x2 00:19:45.477 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:45.477 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b8139a1310f4d53ab98b1adbb5021ae 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b8139a1310f4d53ab98b1adbb5021ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:45.736 
15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:45.736 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:45.995 [ 0]:0x2 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b8139a1310f4d53ab98b1adbb5021ae 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b8139a1310f4d53ab98b1adbb5021ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:45.995 15:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:45.995 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:45.995 [2024-11-06 15:24:13.607247] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:45.995 request: 00:19:45.995 { 00:19:45.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.995 "nsid": 2, 00:19:45.995 "host": "nqn.2016-06.io.spdk:host1", 00:19:45.995 "method": "nvmf_ns_remove_host", 00:19:45.995 "req_id": 1 00:19:45.995 } 00:19:45.995 Got JSON-RPC error response 00:19:45.995 response: 00:19:45.995 { 00:19:45.995 "code": -32602, 00:19:45.995 "message": "Invalid parameters" 00:19:45.995 } 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:46.255 15:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:46.255 [ 0]:0x2 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4b8139a1310f4d53ab98b1adbb5021ae 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4b8139a1310f4d53ab98b1adbb5021ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:46.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3848504 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3848504 /var/tmp/host.sock 00:19:46.255 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3848504 ']' 00:19:46.256 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:19:46.256 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:46.256 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:46.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:46.256 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:46.256 15:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:46.256 [2024-11-06 15:24:13.876814] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:19:46.256 [2024-11-06 15:24:13.876903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848504 ] 00:19:46.514 [2024-11-06 15:24:14.000436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.514 [2024-11-06 15:24:14.109904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.451 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:47.451 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:19:47.451 15:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:47.709 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:47.709 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d04c5d5a-eb0d-48f3-b24e-23e05816ac52 00:19:47.710 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:47.710 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D04C5D5AEB0D48F3B24E23E05816AC52 -i 00:19:47.968 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 09cc266f-1590-406c-ba7d-236c7dff8889 00:19:47.968 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:47.968 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 09CC266F1590406CBA7D236C7DFF8889 -i 00:19:48.227 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:48.487 15:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:48.487 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:48.487 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:49.054 nvme0n1 00:19:49.054 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:49.054 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:49.313 nvme1n2 00:19:49.313 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:49.313 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:49.313 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:49.313 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:49.313 15:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:49.572 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:49.572 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:49.572 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:49.572 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:49.831 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d04c5d5a-eb0d-48f3-b24e-23e05816ac52 == \d\0\4\c\5\d\5\a\-\e\b\0\d\-\4\8\f\3\-\b\2\4\e\-\2\3\e\0\5\8\1\6\a\c\5\2 ]] 00:19:49.831 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:49.831 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:49.831 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:49.831 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 09cc266f-1590-406c-ba7d-236c7dff8889 == \0\9\c\c\2\6\6\f\-\1\5\9\0\-\4\0\6\c\-\b\a\7\d\-\2\3\6\c\7\d\f\f\8\8\8\9 ]] 00:19:49.831 15:24:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:50.090 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid d04c5d5a-eb0d-48f3-b24e-23e05816ac52 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D04C5D5AEB0D48F3B24E23E05816AC52 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D04C5D5AEB0D48F3B24E23E05816AC52 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:50.349 15:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D04C5D5AEB0D48F3B24E23E05816AC52 00:19:50.608 [2024-11-06 15:24:17.992146] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:50.608 [2024-11-06 15:24:17.992186] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:50.608 [2024-11-06 15:24:17.992199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:50.608 request: 00:19:50.608 { 00:19:50.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.608 "namespace": { 00:19:50.608 "bdev_name": "invalid", 00:19:50.608 "nsid": 1, 00:19:50.608 "nguid": "D04C5D5AEB0D48F3B24E23E05816AC52", 00:19:50.608 "no_auto_visible": false 00:19:50.608 }, 00:19:50.608 "method": "nvmf_subsystem_add_ns", 00:19:50.608 "req_id": 1 00:19:50.608 } 00:19:50.608 Got JSON-RPC error response 00:19:50.608 response: 00:19:50.608 { 00:19:50.608 "code": -32602, 00:19:50.608 "message": "Invalid parameters" 00:19:50.608 } 00:19:50.608 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:50.608 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:50.608 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:50.608 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:50.608 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid d04c5d5a-eb0d-48f3-b24e-23e05816ac52 00:19:50.608 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:50.608 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D04C5D5AEB0D48F3B24E23E05816AC52 -i 00:19:50.608 15:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3848504 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3848504 ']' 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3848504 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3848504 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3848504' 00:19:53.244 killing process with pid 3848504 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3848504 00:19:53.244 15:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3848504 00:19:55.149 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:55.408 rmmod nvme_tcp 00:19:55.408 rmmod 
nvme_fabrics 00:19:55.408 rmmod nvme_keyring 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3846100 ']' 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3846100 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3846100 ']' 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3846100 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:55.408 15:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3846100 00:19:55.408 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:55.408 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:55.408 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3846100' 00:19:55.408 killing process with pid 3846100 00:19:55.408 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3846100 00:19:55.408 15:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3846100 00:19:57.313 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:57.313 
15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:57.313 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:57.313 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:57.313 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:57.313 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:57.313 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:57.313 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:57.313 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:57.313 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.313 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.313 15:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:59.220 00:19:59.220 real 0m30.859s 00:19:59.220 user 0m38.760s 00:19:59.220 sys 0m7.199s 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:59.220 ************************************ 00:19:59.220 END TEST nvmf_ns_masking 00:19:59.220 ************************************ 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:59.220 ************************************ 00:19:59.220 START TEST nvmf_nvme_cli 00:19:59.220 ************************************ 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:59.220 * Looking for test storage... 00:19:59.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:59.220 15:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.220 --rc genhtml_branch_coverage=1 00:19:59.220 --rc genhtml_function_coverage=1 00:19:59.220 --rc genhtml_legend=1 00:19:59.220 --rc geninfo_all_blocks=1 00:19:59.220 --rc geninfo_unexecuted_blocks=1 00:19:59.220 
00:19:59.220 ' 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.220 --rc genhtml_branch_coverage=1 00:19:59.220 --rc genhtml_function_coverage=1 00:19:59.220 --rc genhtml_legend=1 00:19:59.220 --rc geninfo_all_blocks=1 00:19:59.220 --rc geninfo_unexecuted_blocks=1 00:19:59.220 00:19:59.220 ' 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.220 --rc genhtml_branch_coverage=1 00:19:59.220 --rc genhtml_function_coverage=1 00:19:59.220 --rc genhtml_legend=1 00:19:59.220 --rc geninfo_all_blocks=1 00:19:59.220 --rc geninfo_unexecuted_blocks=1 00:19:59.220 00:19:59.220 ' 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:59.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.220 --rc genhtml_branch_coverage=1 00:19:59.220 --rc genhtml_function_coverage=1 00:19:59.220 --rc genhtml_legend=1 00:19:59.220 --rc geninfo_all_blocks=1 00:19:59.220 --rc geninfo_unexecuted_blocks=1 00:19:59.220 00:19:59.220 ' 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.220 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.221 15:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:59.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:59.221 15:24:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:20:05.791 15:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:05.791 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.791 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:05.792 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.792 15:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:05.792 Found net devices under 0000:86:00.0: cvl_0_0 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:05.792 Found net devices under 0000:86:00.1: cvl_0_1 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.792 15:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:05.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:20:05.792 00:20:05.792 --- 10.0.0.2 ping statistics --- 00:20:05.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.792 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:05.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:05.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:20:05.792 00:20:05.792 --- 10.0.0.1 ping statistics --- 00:20:05.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.792 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:05.792 15:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3853763 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3853763 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3853763 ']' 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:05.792 15:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:05.792 [2024-11-06 15:24:32.825804] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:20:05.792 [2024-11-06 15:24:32.825890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.792 [2024-11-06 15:24:32.953959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:05.792 [2024-11-06 15:24:33.063724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.792 [2024-11-06 15:24:33.063768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.792 [2024-11-06 15:24:33.063779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.792 [2024-11-06 15:24:33.063788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.792 [2024-11-06 15:24:33.063795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:05.792 [2024-11-06 15:24:33.066428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.792 [2024-11-06 15:24:33.066465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.792 [2024-11-06 15:24:33.066558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.792 [2024-11-06 15:24:33.066581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.051 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:06.051 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:20:06.051 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:06.051 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:06.051 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:06.051 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.051 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:06.051 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.051 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:06.051 [2024-11-06 15:24:33.672852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:06.311 Malloc0 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:06.311 Malloc1 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:06.311 [2024-11-06 15:24:33.895429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.311 15:24:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:20:06.570 00:20:06.570 Discovery Log Number of Records 2, Generation counter 2 00:20:06.570 =====Discovery Log Entry 0====== 00:20:06.570 trtype: tcp 00:20:06.570 adrfam: ipv4 00:20:06.570 subtype: current discovery subsystem 00:20:06.570 treq: not required 00:20:06.570 portid: 0 00:20:06.570 trsvcid: 4420 
00:20:06.570 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:06.570 traddr: 10.0.0.2 00:20:06.570 eflags: explicit discovery connections, duplicate discovery information 00:20:06.570 sectype: none 00:20:06.570 =====Discovery Log Entry 1====== 00:20:06.570 trtype: tcp 00:20:06.570 adrfam: ipv4 00:20:06.570 subtype: nvme subsystem 00:20:06.570 treq: not required 00:20:06.570 portid: 0 00:20:06.570 trsvcid: 4420 00:20:06.570 subnqn: nqn.2016-06.io.spdk:cnode1 00:20:06.570 traddr: 10.0.0.2 00:20:06.570 eflags: none 00:20:06.570 sectype: none 00:20:06.570 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:20:06.570 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:20:06.570 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:06.570 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:06.570 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:06.570 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:06.570 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:06.570 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:06.570 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:06.570 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:20:06.570 15:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:07.947 15:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:07.947 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:20:07.947 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:07.947 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:20:07.947 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:20:07.947 15:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:09.852 
15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:20:09.852 /dev/nvme0n2 ]] 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:09.852 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:10.111 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:10.111 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:10.111 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:20:10.111 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:10.111 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:10.111 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:20:10.111 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:10.111 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:10.111 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:20:10.111 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:10.111 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:20:10.111 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:10.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # 
return 0 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:20:10.370 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:20:10.371 15:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:10.371 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:20:10.371 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:10.371 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:20:10.371 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:10.371 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:10.630 rmmod nvme_tcp 00:20:10.630 rmmod nvme_fabrics 00:20:10.630 rmmod nvme_keyring 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3853763 ']' 
00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3853763 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3853763 ']' 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3853763 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3853763 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3853763' 00:20:10.630 killing process with pid 3853763 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3853763 00:20:10.630 15:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3853763 00:20:12.015 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:12.015 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:12.015 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:12.015 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:20:12.015 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:12.015 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # 
iptables-save 00:20:12.015 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:20:12.015 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:12.015 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:12.015 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.015 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.015 15:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:14.553 00:20:14.553 real 0m15.082s 00:20:14.553 user 0m26.844s 00:20:14.553 sys 0m5.261s 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:14.553 ************************************ 00:20:14.553 END TEST nvmf_nvme_cli 00:20:14.553 ************************************ 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:14.553 ************************************ 00:20:14.553 START 
TEST nvmf_auth_target 00:20:14.553 ************************************ 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:14.553 * Looking for test storage... 00:20:14.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.553 
15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:14.553 
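The `cmp_versions` trace above splits both version strings on `.`, `-`, and `:` via `IFS` and compares them component-wise (here deciding that lcov 1.15 < 2, so the lcov 1.x option set is chosen). A standalone sketch of the same idea; `ver_cmp` is an illustrative name, not the actual helper from `scripts/common.sh`:

```shell
#!/usr/bin/env bash
# Component-wise version comparison, mirroring the IFS=.-: splitting seen
# in the cmp_versions trace above. Prints "lt", "eq", or "gt".
ver_cmp() {
    local IFS=.-:
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components compare as 0, so "2" equals "2.0".
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then echo lt; return; fi
        if (( a > b )); then echo gt; return; fi
    done
    echo eq
}

ver_cmp 1.15 2    # lt: the lcov 1.x branch of the script is taken
ver_cmp 2.0 2     # eq
```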
15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.553 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:14.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.553 --rc genhtml_branch_coverage=1 00:20:14.553 --rc genhtml_function_coverage=1 00:20:14.554 --rc genhtml_legend=1 00:20:14.554 --rc geninfo_all_blocks=1 00:20:14.554 --rc geninfo_unexecuted_blocks=1 00:20:14.554 00:20:14.554 ' 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:14.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.554 --rc genhtml_branch_coverage=1 00:20:14.554 --rc genhtml_function_coverage=1 00:20:14.554 --rc genhtml_legend=1 00:20:14.554 --rc geninfo_all_blocks=1 00:20:14.554 --rc geninfo_unexecuted_blocks=1 00:20:14.554 00:20:14.554 ' 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:14.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.554 --rc genhtml_branch_coverage=1 00:20:14.554 --rc genhtml_function_coverage=1 00:20:14.554 --rc genhtml_legend=1 00:20:14.554 --rc geninfo_all_blocks=1 00:20:14.554 --rc geninfo_unexecuted_blocks=1 00:20:14.554 00:20:14.554 ' 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:14.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.554 --rc genhtml_branch_coverage=1 00:20:14.554 --rc genhtml_function_coverage=1 00:20:14.554 --rc genhtml_legend=1 00:20:14.554 --rc geninfo_all_blocks=1 00:20:14.554 --rc geninfo_unexecuted_blocks=1 00:20:14.554 00:20:14.554 ' 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:14.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:14.554 15:24:41 
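The `[: : integer expression expected` error recorded above (common.sh line 33) is the classic failure mode of `[ "$var" -eq 1 ]` when the variable is empty: `test`'s `-eq` needs two integers. A minimal reproduction and the usual guard; the variable name here is illustrative, not a real SPDK variable:

```shell
# Reproduce the class of error logged at common.sh line 33: test(1)'s -eq
# requires two integer operands, and an unset/empty variable is not one.
unset MAYBE_FLAG
if [ "$MAYBE_FLAG" -eq 1 ] 2>/dev/null; then
    echo "flag set"          # never reached; the test itself errors out
fi

# Guarded form: default the empty value to 0 before the comparison.
if [ "${MAYBE_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"        # this branch is taken
fi
```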
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:14.554 15:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:14.554 15:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.131 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.131 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:21.131 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:21.131 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:21.132 15:24:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:21.132 15:24:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:21.132 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:21.132 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.132 
15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:21.132 Found net devices under 0000:86:00.0: cvl_0_0 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:21.132 
15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:21.132 Found net devices under 0000:86:00.1: cvl_0_1 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:21.132 15:24:47 
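Each `Found ...` pair above comes from resolving a PCI address to its kernel interface name through sysfs (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`). The same lookup in isolation, which simply prints nothing on machines with no PCI network devices:

```shell
# Enumerate PCI network functions and their interface names via sysfs,
# the same /sys/bus/pci/devices/<addr>/net/* lookup used in the trace above.
for pci in /sys/bus/pci/devices/*; do
    [ -d "$pci/net" ] || continue          # not a network function
    vendor=$(cat "$pci/vendor")            # e.g. 0x8086 (Intel)
    device=$(cat "$pci/device")            # e.g. 0x159b (E810 NIC)
    for netdir in "$pci"/net/*; do
        printf 'Found %s (%s - %s): %s\n' \
            "${pci##*/}" "$vendor" "$device" "${netdir##*/}"
    done
done
```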
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:21.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:20:21.132 00:20:21.132 --- 10.0.0.2 ping statistics --- 00:20:21.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.132 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:20:21.132 00:20:21.132 --- 10.0.0.1 ping statistics --- 00:20:21.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.132 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:20:21.132 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
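The `nvmf_tcp_init` sequence above moves one NIC port into a private namespace (`cvl_0_0_ns_spdk`), addresses both sides, opens port 4420, and pings in each direction. A sketch of the same pattern using a veth pair in place of the two physical `cvl_0_*` ports; it needs root plus `NET_ADMIN`, and all names here are illustrative:

```shell
# Target/initiator split as in nvmf_tcp_init, but with a veth pair.
NS=demo_ns_spdk
if [ "$(id -u)" -ne 0 ] || ! ip netns add "$NS" 2>/dev/null; then
    echo "needs root and NET_ADMIN; skipping"
    exit 0
fi
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns "$NS"            # target side enters the namespace

ip addr add 10.0.0.1/24 dev veth_init       # initiator IP, host side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up

ping -c 1 10.0.0.2                          # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> host

ip netns del "$NS"                          # also tears down the veth pair
```

Deleting the namespace destroys the veth end inside it, which removes its peer as well; the real script instead keeps the namespace alive and later launches `nvmf_tgt` inside it via `ip netns exec`, as seen further down in the log.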
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3858403 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3858403 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3858403 ']' 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.133 15:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3858641 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3525fdc499e412c797836232cdef2d94acb376ad7aef20ea 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hBQ 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3525fdc499e412c797836232cdef2d94acb376ad7aef20ea 0 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3525fdc499e412c797836232cdef2d94acb376ad7aef20ea 0 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3525fdc499e412c797836232cdef2d94acb376ad7aef20ea 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hBQ 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hBQ 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.hBQ 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=aef38e36a495b953aac98f8e8d4f5a26212869ea060d8cf6e2872bd900137c09 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.29r 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key aef38e36a495b953aac98f8e8d4f5a26212869ea060d8cf6e2872bd900137c09 3 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 aef38e36a495b953aac98f8e8d4f5a26212869ea060d8cf6e2872bd900137c09 3 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=aef38e36a495b953aac98f8e8d4f5a26212869ea060d8cf6e2872bd900137c09 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:21.393 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.29r 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.29r 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.29r 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=be10ff5f80e995087679899f013f4dc8 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.87V 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key be10ff5f80e995087679899f013f4dc8 1 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
be10ff5f80e995087679899f013f4dc8 1 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=be10ff5f80e995087679899f013f4dc8 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:21.394 15:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:21.394 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.87V 00:20:21.394 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.87V 00:20:21.394 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.87V 00:20:21.394 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:21.394 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.394 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.394 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:21.394 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:21.394 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:21.394 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=271cf2e259ccdc746fb26488f19dec40eb32ab04d2071aa0 00:20:21.654 15:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OEq 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 271cf2e259ccdc746fb26488f19dec40eb32ab04d2071aa0 2 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 271cf2e259ccdc746fb26488f19dec40eb32ab04d2071aa0 2 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=271cf2e259ccdc746fb26488f19dec40eb32ab04d2071aa0 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OEq 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OEq 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.OEq 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e4e10b06eeef29d0910c4e835dd9c45f29cb731f6ae267c8 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.DXC 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e4e10b06eeef29d0910c4e835dd9c45f29cb731f6ae267c8 2 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e4e10b06eeef29d0910c4e835dd9c45f29cb731f6ae267c8 2 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e4e10b06eeef29d0910c4e835dd9c45f29cb731f6ae267c8 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.DXC 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.DXC 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.DXC 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d398baf8b4e19ea8eaf65f9db556107c 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.NTB 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d398baf8b4e19ea8eaf65f9db556107c 1 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d398baf8b4e19ea8eaf65f9db556107c 1 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d398baf8b4e19ea8eaf65f9db556107c 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.NTB 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.NTB 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.NTB 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:21.654 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7a46196f3fe3454e64b081bbe94c39e3da6aa92fda53f3c1a4a587047053331a 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.M11 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7a46196f3fe3454e64b081bbe94c39e3da6aa92fda53f3c1a4a587047053331a 3 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 7a46196f3fe3454e64b081bbe94c39e3da6aa92fda53f3c1a4a587047053331a 3 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7a46196f3fe3454e64b081bbe94c39e3da6aa92fda53f3c1a4a587047053331a 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.M11 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.M11 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.M11 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3858403 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3858403 ']' 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.655 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.914 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:21.914 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:21.914 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3858641 /var/tmp/host.sock 00:20:21.914 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3858641 ']' 00:20:21.914 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:20:21.914 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.914 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:21.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:20:21.914 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.914 15:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hBQ 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.hBQ 00:20:22.482 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.hBQ 00:20:22.742 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.29r ]] 00:20:22.742 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.29r 00:20:22.742 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.742 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.742 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.742 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.29r 00:20:22.742 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.29r 00:20:23.002 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:23.002 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.87V 00:20:23.002 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.002 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.002 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.002 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.87V 00:20:23.002 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.87V 00:20:23.261 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.OEq ]] 00:20:23.261 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OEq 00:20:23.261 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.261 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.261 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.261 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OEq 00:20:23.262 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OEq 00:20:23.262 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:23.262 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DXC 00:20:23.262 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.262 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.262 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.262 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.DXC 00:20:23.262 15:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.DXC 00:20:23.521 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.NTB ]] 00:20:23.521 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NTB 00:20:23.521 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.521 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.521 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.521 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NTB 00:20:23.521 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NTB 00:20:23.780 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:23.780 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.M11 00:20:23.780 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.780 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.780 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.780 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.M11 00:20:23.780 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.M11 00:20:24.040 15:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:24.040 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:24.040 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.040 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.040 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.040 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.299 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:24.299 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.299 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.299 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:24.299 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.299 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.299 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.299 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.299 15:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.299 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.299 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.299 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.299 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.559 00:20:24.559 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.559 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.559 15:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.559 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.559 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.559 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.559 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.559 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.559 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.559 { 00:20:24.559 "cntlid": 1, 00:20:24.559 "qid": 0, 00:20:24.559 "state": "enabled", 00:20:24.559 "thread": "nvmf_tgt_poll_group_000", 00:20:24.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:24.559 "listen_address": { 00:20:24.559 "trtype": "TCP", 00:20:24.559 "adrfam": "IPv4", 00:20:24.559 "traddr": "10.0.0.2", 00:20:24.559 "trsvcid": "4420" 00:20:24.559 }, 00:20:24.559 "peer_address": { 00:20:24.559 "trtype": "TCP", 00:20:24.559 "adrfam": "IPv4", 00:20:24.559 "traddr": "10.0.0.1", 00:20:24.559 "trsvcid": "54062" 00:20:24.559 }, 00:20:24.559 "auth": { 00:20:24.559 "state": "completed", 00:20:24.559 "digest": "sha256", 00:20:24.559 "dhgroup": "null" 00:20:24.559 } 00:20:24.559 } 00:20:24.559 ]' 00:20:24.559 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.819 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.819 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.819 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:24.819 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.819 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.819 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.819 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.078 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:20:25.078 15:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.647 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.907 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.907 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.907 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.907 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.907 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.907 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.907 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.907 00:20:26.167 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.167 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.167 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.167 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.167 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.167 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.167 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.167 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.167 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.167 { 00:20:26.167 "cntlid": 3, 00:20:26.167 "qid": 0, 00:20:26.167 "state": "enabled", 00:20:26.167 "thread": "nvmf_tgt_poll_group_000", 00:20:26.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:26.167 "listen_address": { 00:20:26.167 "trtype": "TCP", 00:20:26.167 "adrfam": "IPv4", 00:20:26.167 
"traddr": "10.0.0.2", 00:20:26.167 "trsvcid": "4420" 00:20:26.167 }, 00:20:26.167 "peer_address": { 00:20:26.167 "trtype": "TCP", 00:20:26.167 "adrfam": "IPv4", 00:20:26.167 "traddr": "10.0.0.1", 00:20:26.167 "trsvcid": "54094" 00:20:26.167 }, 00:20:26.167 "auth": { 00:20:26.167 "state": "completed", 00:20:26.167 "digest": "sha256", 00:20:26.167 "dhgroup": "null" 00:20:26.167 } 00:20:26.167 } 00:20:26.167 ]' 00:20:26.167 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.167 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.427 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.427 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:26.427 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.427 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.427 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.427 15:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.687 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:20:26.687 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:27.255 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.256 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.256 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.256 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.515 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.515 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.515 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.515 15:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.515 00:20:27.515 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.515 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.515 
15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.774 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.774 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.774 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.774 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.774 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.774 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.774 { 00:20:27.774 "cntlid": 5, 00:20:27.774 "qid": 0, 00:20:27.774 "state": "enabled", 00:20:27.774 "thread": "nvmf_tgt_poll_group_000", 00:20:27.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:27.774 "listen_address": { 00:20:27.774 "trtype": "TCP", 00:20:27.774 "adrfam": "IPv4", 00:20:27.774 "traddr": "10.0.0.2", 00:20:27.774 "trsvcid": "4420" 00:20:27.774 }, 00:20:27.774 "peer_address": { 00:20:27.774 "trtype": "TCP", 00:20:27.774 "adrfam": "IPv4", 00:20:27.774 "traddr": "10.0.0.1", 00:20:27.774 "trsvcid": "54118" 00:20:27.774 }, 00:20:27.774 "auth": { 00:20:27.774 "state": "completed", 00:20:27.774 "digest": "sha256", 00:20:27.774 "dhgroup": "null" 00:20:27.774 } 00:20:27.774 } 00:20:27.774 ]' 00:20:27.774 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.774 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.775 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:20:28.034 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:28.034 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.034 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.034 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.034 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.034 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:20:28.034 15:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:20:28.604 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.604 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:28.604 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.604 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.604 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.604 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.604 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:28.604 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.863 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.123 00:20:29.123 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.123 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.123 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.383 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.383 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.383 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.383 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.383 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.383 
15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.383 { 00:20:29.383 "cntlid": 7, 00:20:29.383 "qid": 0, 00:20:29.383 "state": "enabled", 00:20:29.383 "thread": "nvmf_tgt_poll_group_000", 00:20:29.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:29.383 "listen_address": { 00:20:29.383 "trtype": "TCP", 00:20:29.383 "adrfam": "IPv4", 00:20:29.383 "traddr": "10.0.0.2", 00:20:29.383 "trsvcid": "4420" 00:20:29.383 }, 00:20:29.383 "peer_address": { 00:20:29.383 "trtype": "TCP", 00:20:29.383 "adrfam": "IPv4", 00:20:29.383 "traddr": "10.0.0.1", 00:20:29.383 "trsvcid": "54140" 00:20:29.383 }, 00:20:29.383 "auth": { 00:20:29.383 "state": "completed", 00:20:29.383 "digest": "sha256", 00:20:29.383 "dhgroup": "null" 00:20:29.383 } 00:20:29.383 } 00:20:29.383 ]' 00:20:29.383 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.383 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.383 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.383 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:29.383 15:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.383 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.383 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.383 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.642 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:20:29.642 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:20:30.217 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.217 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:30.217 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.217 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.217 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.217 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.217 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.217 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.217 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:30.476 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:30.476 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.476 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.476 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:30.476 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.476 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.477 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.477 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.477 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.477 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.477 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.477 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.477 15:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.736 00:20:30.736 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.736 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.736 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.996 { 00:20:30.996 "cntlid": 9, 00:20:30.996 "qid": 0, 00:20:30.996 "state": "enabled", 00:20:30.996 "thread": "nvmf_tgt_poll_group_000", 00:20:30.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:30.996 "listen_address": { 00:20:30.996 "trtype": "TCP", 00:20:30.996 "adrfam": "IPv4", 00:20:30.996 "traddr": "10.0.0.2", 00:20:30.996 "trsvcid": "4420" 00:20:30.996 }, 00:20:30.996 "peer_address": { 00:20:30.996 "trtype": "TCP", 00:20:30.996 "adrfam": "IPv4", 00:20:30.996 "traddr": "10.0.0.1", 00:20:30.996 "trsvcid": "54154" 00:20:30.996 
}, 00:20:30.996 "auth": { 00:20:30.996 "state": "completed", 00:20:30.996 "digest": "sha256", 00:20:30.996 "dhgroup": "ffdhe2048" 00:20:30.996 } 00:20:30.996 } 00:20:30.996 ]' 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.996 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.257 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:20:31.257 15:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret 
DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:20:31.826 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.826 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:31.826 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.826 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.826 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.826 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.826 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:31.826 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.086 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:32.087 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.087 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.087 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:32.087 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1
00:20:32.087 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:32.087 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:32.087 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:32.087 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:32.087 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:32.087 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:32.087 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:32.087 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:32.346
00:20:32.346 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:32.346 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:32.346 15:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:32.606 {
00:20:32.606 "cntlid": 11,
00:20:32.606 "qid": 0,
00:20:32.606 "state": "enabled",
00:20:32.606 "thread": "nvmf_tgt_poll_group_000",
00:20:32.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:32.606 "listen_address": {
00:20:32.606 "trtype": "TCP",
00:20:32.606 "adrfam": "IPv4",
00:20:32.606 "traddr": "10.0.0.2",
00:20:32.606 "trsvcid": "4420"
00:20:32.606 },
00:20:32.606 "peer_address": {
00:20:32.606 "trtype": "TCP",
00:20:32.606 "adrfam": "IPv4",
00:20:32.606 "traddr": "10.0.0.1",
00:20:32.606 "trsvcid": "42808"
00:20:32.606 },
00:20:32.606 "auth": {
00:20:32.606 "state": "completed",
00:20:32.606 "digest": "sha256",
00:20:32.606 "dhgroup": "ffdhe2048"
00:20:32.606 }
00:20:32.606 }
00:20:32.606 ]'
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:32.606 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:32.866 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==:
00:20:32.866 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==:
00:20:33.435 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:33.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:33.435 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:33.435 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:33.435 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.435 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:33.435 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:33.435 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:33.435 15:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:33.694 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:33.953
00:20:33.953 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:33.953 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:33.953 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:34.213 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:34.213 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:34.213 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:34.213 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.213 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:34.213 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:34.213 {
00:20:34.213 "cntlid": 13,
00:20:34.213 "qid": 0,
00:20:34.213 "state": "enabled",
00:20:34.213 "thread": "nvmf_tgt_poll_group_000",
00:20:34.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:34.213 "listen_address": {
00:20:34.213 "trtype": "TCP",
00:20:34.213 "adrfam": "IPv4",
00:20:34.213 "traddr": "10.0.0.2",
00:20:34.213 "trsvcid": "4420"
00:20:34.213 },
00:20:34.213 "peer_address": {
00:20:34.213 "trtype": "TCP",
00:20:34.213 "adrfam": "IPv4",
00:20:34.213 "traddr": "10.0.0.1",
00:20:34.213 "trsvcid": "42828"
00:20:34.213 },
00:20:34.213 "auth": {
00:20:34.213 "state": "completed",
00:20:34.213 "digest": "sha256",
00:20:34.213 "dhgroup": "ffdhe2048"
00:20:34.213 }
00:20:34.213 }
00:20:34.213 ]'
00:20:34.213 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:34.213 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:34.213 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:34.213 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:34.213 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:34.213 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:34.214 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:34.214 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:34.473 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW:
00:20:34.473 15:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW:
00:20:35.042 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:35.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:35.042 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:35.042 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:35.042 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:35.042 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:35.042 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:35.042 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:35.042 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:35.302 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:35.562
00:20:35.562 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:35.562 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:35.562 15:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:35.562 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:35.562 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:35.562 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:35.562 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:35.562 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:35.562 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:35.562 {
00:20:35.562 "cntlid": 15,
00:20:35.562 "qid": 0,
00:20:35.562 "state": "enabled",
00:20:35.562 "thread": "nvmf_tgt_poll_group_000",
00:20:35.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:35.562 "listen_address": {
00:20:35.562 "trtype": "TCP",
00:20:35.562 "adrfam": "IPv4",
00:20:35.562 "traddr": "10.0.0.2",
00:20:35.562 "trsvcid": "4420"
00:20:35.562 },
00:20:35.562 "peer_address": {
00:20:35.562 "trtype": "TCP",
00:20:35.562 "adrfam": "IPv4",
00:20:35.562 "traddr": "10.0.0.1",
00:20:35.562 "trsvcid": "42852"
00:20:35.562 },
00:20:35.562 "auth": {
00:20:35.562 "state": "completed",
00:20:35.562 "digest": "sha256",
00:20:35.562 "dhgroup": "ffdhe2048"
00:20:35.562 }
00:20:35.562 }
00:20:35.562 ]'
00:20:35.822 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:35.822 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:35.822 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:35.822 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:35.822 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:35.822 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:35.822 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:35.822 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:36.081 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=:
00:20:36.081 15:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=:
00:20:36.652 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:36.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:36.652 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:36.652 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:36.652 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:36.652 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:36.652 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:36.652 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:36.652 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:36.652 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:36.912 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:37.171
00:20:37.171 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:37.171 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:37.171 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:37.171 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:37.171 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:37.171 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:37.171 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:37.171 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:37.171 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:37.171 {
00:20:37.171 "cntlid": 17,
00:20:37.171 "qid": 0,
00:20:37.171 "state": "enabled",
00:20:37.171 "thread": "nvmf_tgt_poll_group_000",
00:20:37.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:37.171 "listen_address": {
00:20:37.171 "trtype": "TCP",
00:20:37.171 "adrfam": "IPv4",
00:20:37.171 "traddr": "10.0.0.2",
00:20:37.171 "trsvcid": "4420"
00:20:37.171 },
00:20:37.171 "peer_address": {
00:20:37.171 "trtype": "TCP",
00:20:37.171 "adrfam": "IPv4",
00:20:37.171 "traddr": "10.0.0.1",
00:20:37.171 "trsvcid": "42890"
00:20:37.171 },
00:20:37.171 "auth": {
00:20:37.171 "state": "completed",
00:20:37.171 "digest": "sha256",
00:20:37.171 "dhgroup": "ffdhe3072"
00:20:37.171 }
00:20:37.171 }
00:20:37.171 ]'
00:20:37.431 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:37.431 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:37.431 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:37.431 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:37.431 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:37.431 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:37.431 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:37.431 15:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:37.690 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=:
00:20:37.690 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=:
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:38.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:38.259 15:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:38.518
00:20:38.518 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:38.518 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:38.518 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:38.778 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:38.778 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:38.778 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:38.778 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:38.778 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:38.778 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:38.778 {
00:20:38.778 "cntlid": 19,
00:20:38.778 "qid": 0,
00:20:38.778 "state": "enabled",
00:20:38.778 "thread": "nvmf_tgt_poll_group_000",
00:20:38.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:38.778 "listen_address": {
00:20:38.778 "trtype": "TCP",
00:20:38.778 "adrfam": "IPv4",
00:20:38.778 "traddr": "10.0.0.2",
00:20:38.778 "trsvcid": "4420"
00:20:38.778 },
00:20:38.778 "peer_address": {
00:20:38.778 "trtype": "TCP",
00:20:38.778 "adrfam": "IPv4",
00:20:38.778 "traddr": "10.0.0.1",
00:20:38.778 "trsvcid": "42916"
00:20:38.778 },
00:20:38.778 "auth": {
00:20:38.778 "state": "completed",
00:20:38.778 "digest": "sha256",
00:20:38.778 "dhgroup": "ffdhe3072"
00:20:38.778 }
00:20:38.778 }
00:20:38.778 ]'
00:20:38.778 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:38.778 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:38.778 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:39.037 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:39.037 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:39.037 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:39.037 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:39.037 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:39.037 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==:
00:20:39.037 15:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==:
00:20:39.606 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:39.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:39.867 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:40.126
00:20:40.126 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:40.126 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:40.126 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:40.385 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:40.385 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:40.385 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:40.385 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.385 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:40.385 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:40.385 {
00:20:40.385 "cntlid": 21,
00:20:40.385 "qid": 0,
00:20:40.385 "state": "enabled",
00:20:40.385 "thread": "nvmf_tgt_poll_group_000",
00:20:40.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:40.385 "listen_address": {
00:20:40.385 "trtype": "TCP",
00:20:40.385 "adrfam": "IPv4",
00:20:40.385 "traddr": "10.0.0.2",
00:20:40.385 "trsvcid": "4420"
00:20:40.385 },
00:20:40.385 "peer_address": {
00:20:40.385 "trtype": "TCP",
00:20:40.385 "adrfam": "IPv4",
00:20:40.385 "traddr": "10.0.0.1",
00:20:40.385 "trsvcid": "42948"
00:20:40.385 },
00:20:40.385 "auth": {
00:20:40.385 "state": "completed",
00:20:40.385 "digest": "sha256",
00:20:40.385 "dhgroup": "ffdhe3072"
00:20:40.385 }
00:20:40.385 }
00:20:40.385 ]'
00:20:40.385 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:40.385 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:40.385 15:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:40.644 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:40.644 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:40.644 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:40.645 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:40.645 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:40.645 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW:
00:20:40.645 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:20:41.213 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.213 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:41.213 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.213 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.213 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.213 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.213 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:41.213 15:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.476 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.776 00:20:41.776 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.776 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.776 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.061 { 00:20:42.061 "cntlid": 23, 00:20:42.061 "qid": 0, 00:20:42.061 "state": "enabled", 00:20:42.061 "thread": "nvmf_tgt_poll_group_000", 00:20:42.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:42.061 "listen_address": { 00:20:42.061 "trtype": "TCP", 00:20:42.061 "adrfam": "IPv4", 00:20:42.061 "traddr": "10.0.0.2", 00:20:42.061 "trsvcid": "4420" 00:20:42.061 }, 00:20:42.061 "peer_address": { 00:20:42.061 "trtype": "TCP", 00:20:42.061 "adrfam": "IPv4", 00:20:42.061 "traddr": "10.0.0.1", 00:20:42.061 "trsvcid": "42982" 00:20:42.061 }, 00:20:42.061 "auth": { 00:20:42.061 "state": "completed", 00:20:42.061 "digest": "sha256", 00:20:42.061 "dhgroup": "ffdhe3072" 00:20:42.061 } 00:20:42.061 } 00:20:42.061 ]' 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.061 15:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.061 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.344 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:20:42.344 15:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:20:42.929 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.929 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:42.929 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.929 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:42.929 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.929 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.929 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.929 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:42.929 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.189 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.447 00:20:43.447 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.447 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.447 15:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.707 15:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.707 { 00:20:43.707 "cntlid": 25, 00:20:43.707 "qid": 0, 00:20:43.707 "state": "enabled", 00:20:43.707 "thread": "nvmf_tgt_poll_group_000", 00:20:43.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:43.707 "listen_address": { 00:20:43.707 "trtype": "TCP", 00:20:43.707 "adrfam": "IPv4", 00:20:43.707 "traddr": "10.0.0.2", 00:20:43.707 "trsvcid": "4420" 00:20:43.707 }, 00:20:43.707 "peer_address": { 00:20:43.707 "trtype": "TCP", 00:20:43.707 "adrfam": "IPv4", 00:20:43.707 "traddr": "10.0.0.1", 00:20:43.707 "trsvcid": "58406" 00:20:43.707 }, 00:20:43.707 "auth": { 00:20:43.707 "state": "completed", 00:20:43.707 "digest": "sha256", 00:20:43.707 "dhgroup": "ffdhe4096" 00:20:43.707 } 00:20:43.707 } 00:20:43.707 ]' 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.707 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.966 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:20:43.966 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:20:44.534 15:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.535 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:44.535 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.535 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.535 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.535 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.535 15:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.794 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.053 00:20:45.053 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.053 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.053 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.311 { 00:20:45.311 "cntlid": 27, 00:20:45.311 "qid": 0, 00:20:45.311 "state": "enabled", 00:20:45.311 "thread": "nvmf_tgt_poll_group_000", 00:20:45.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:45.311 "listen_address": { 00:20:45.311 "trtype": "TCP", 00:20:45.311 "adrfam": "IPv4", 00:20:45.311 "traddr": "10.0.0.2", 00:20:45.311 
"trsvcid": "4420" 00:20:45.311 }, 00:20:45.311 "peer_address": { 00:20:45.311 "trtype": "TCP", 00:20:45.311 "adrfam": "IPv4", 00:20:45.311 "traddr": "10.0.0.1", 00:20:45.311 "trsvcid": "58426" 00:20:45.311 }, 00:20:45.311 "auth": { 00:20:45.311 "state": "completed", 00:20:45.311 "digest": "sha256", 00:20:45.311 "dhgroup": "ffdhe4096" 00:20:45.311 } 00:20:45.311 } 00:20:45.311 ]' 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.311 15:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.570 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:20:45.570 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:20:46.139 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.139 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:46.139 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.139 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.139 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.139 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.139 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:46.139 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:46.398 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:46.398 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.398 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:46.398 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:46.398 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:46.398 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.398 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.398 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.399 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.399 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.399 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.399 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.399 15:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.658 00:20:46.658 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.658 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name'
00:20:46.658 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:46.917 {
00:20:46.917 "cntlid": 29,
00:20:46.917 "qid": 0,
00:20:46.917 "state": "enabled",
00:20:46.917 "thread": "nvmf_tgt_poll_group_000",
00:20:46.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:46.917 "listen_address": {
00:20:46.917 "trtype": "TCP",
00:20:46.917 "adrfam": "IPv4",
00:20:46.917 "traddr": "10.0.0.2",
00:20:46.917 "trsvcid": "4420"
00:20:46.917 },
00:20:46.917 "peer_address": {
00:20:46.917 "trtype": "TCP",
00:20:46.917 "adrfam": "IPv4",
00:20:46.917 "traddr": "10.0.0.1",
00:20:46.917 "trsvcid": "58454"
00:20:46.917 },
00:20:46.917 "auth": {
00:20:46.917 "state": "completed",
00:20:46.917 "digest": "sha256",
00:20:46.917 "dhgroup": "ffdhe4096"
00:20:46.917 }
00:20:46.917 }
00:20:46.917 ]'
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:46.917 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:47.176 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW:
00:20:47.176 15:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW:
00:20:47.746 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:47.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:47.746 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:47.746 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:47.746 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.746 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:47.746 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:47.746 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:47.746 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:48.006 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:48.265
00:20:48.265 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:48.265 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:48.265 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:48.265 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:48.265 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:48.265 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.265 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.524 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.524 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:48.524 {
00:20:48.524 "cntlid": 31,
00:20:48.524 "qid": 0,
00:20:48.524 "state": "enabled",
00:20:48.524 "thread": "nvmf_tgt_poll_group_000",
00:20:48.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:48.524 "listen_address": {
00:20:48.524 "trtype": "TCP",
00:20:48.524 "adrfam": "IPv4",
00:20:48.524 "traddr": "10.0.0.2",
00:20:48.524 "trsvcid": "4420"
00:20:48.524 },
00:20:48.524 "peer_address": {
00:20:48.524 "trtype": "TCP",
00:20:48.524 "adrfam": "IPv4",
00:20:48.524 "traddr": "10.0.0.1",
00:20:48.524 "trsvcid": "58482"
00:20:48.524 },
00:20:48.524 "auth": {
00:20:48.524 "state": "completed",
00:20:48.524 "digest": "sha256",
00:20:48.524 "dhgroup": "ffdhe4096"
00:20:48.525 }
00:20:48.525 }
00:20:48.525 ]'
00:20:48.525 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:48.525 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:48.525 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:48.525 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:48.525 15:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:48.525 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:48.525 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:48.525 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:48.784 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=:
00:20:48.784 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=:
00:20:49.352 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:49.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:49.352 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:49.352 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:49.352 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:49.352 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:49.352 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:49.352 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:49.352 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:49.352 15:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:49.611 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:49.870
00:20:49.870 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:49.870 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:49.870 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:50.129 {
00:20:50.129 "cntlid": 33,
00:20:50.129 "qid": 0,
00:20:50.129 "state": "enabled",
00:20:50.129 "thread": "nvmf_tgt_poll_group_000",
00:20:50.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:50.129 "listen_address": {
00:20:50.129 "trtype": "TCP",
00:20:50.129 "adrfam": "IPv4",
00:20:50.129 "traddr": "10.0.0.2",
00:20:50.129 "trsvcid": "4420"
00:20:50.129 },
00:20:50.129 "peer_address": {
00:20:50.129 "trtype": "TCP",
00:20:50.129 "adrfam": "IPv4",
00:20:50.129 "traddr": "10.0.0.1",
00:20:50.129 "trsvcid": "58502"
00:20:50.129 },
00:20:50.129 "auth": {
00:20:50.129 "state": "completed",
00:20:50.129 "digest": "sha256",
00:20:50.129 "dhgroup": "ffdhe6144"
00:20:50.129 }
00:20:50.129 }
00:20:50.129 ]'
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:50.129 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:50.388 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=:
00:20:50.388 15:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=:
00:20:50.957 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:50.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:50.957 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:50.957 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:50.957 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.957 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:50.957 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:50.958 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:50.958 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.217 15:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:51.476
00:20:51.476 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:51.476 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:51.476 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:51.735 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:51.735 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:51.735 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:51.735 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.735 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:51.735 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:51.735 {
00:20:51.735 "cntlid": 35,
00:20:51.735 "qid": 0,
00:20:51.735 "state": "enabled",
00:20:51.735 "thread": "nvmf_tgt_poll_group_000",
00:20:51.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:51.735 "listen_address": {
00:20:51.735 "trtype": "TCP",
00:20:51.735 "adrfam": "IPv4",
00:20:51.735 "traddr": "10.0.0.2",
00:20:51.735 "trsvcid": "4420"
00:20:51.735 },
00:20:51.735 "peer_address": {
00:20:51.735 "trtype": "TCP",
00:20:51.735 "adrfam": "IPv4",
00:20:51.735 "traddr": "10.0.0.1",
00:20:51.735 "trsvcid": "58528"
00:20:51.735 },
00:20:51.735 "auth": {
00:20:51.735 "state": "completed",
00:20:51.735 "digest": "sha256",
00:20:51.735 "dhgroup": "ffdhe6144"
00:20:51.735 }
00:20:51.735 }
00:20:51.735 ]'
00:20:51.735 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:51.735 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:51.735 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:51.735 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:51.735 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:51.995 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:51.995 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:51.995 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:51.995 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==:
00:20:51.995 15:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==:
00:20:52.569 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:52.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:52.569 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:52.569 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:52.569 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.569 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:52.569 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:52.570 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:52.570 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:52.829 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:53.088
00:20:53.347 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:53.347 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:53.347 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:53.347 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:53.347 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:53.347 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:53.347 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.347 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:53.347 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:53.347 {
00:20:53.347 "cntlid": 37,
00:20:53.347 "qid": 0,
00:20:53.347 "state": "enabled",
00:20:53.347 "thread": "nvmf_tgt_poll_group_000",
00:20:53.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:53.347 "listen_address": {
00:20:53.347 "trtype": "TCP",
00:20:53.347 "adrfam": "IPv4",
00:20:53.347 "traddr": "10.0.0.2",
00:20:53.347 "trsvcid": "4420"
00:20:53.347 },
00:20:53.347 "peer_address": {
00:20:53.347 "trtype": "TCP",
00:20:53.347 "adrfam": "IPv4",
00:20:53.347 "traddr": "10.0.0.1",
00:20:53.347 "trsvcid": "58900"
00:20:53.347 },
00:20:53.347 "auth": {
00:20:53.347 "state": "completed",
00:20:53.347 "digest": "sha256",
00:20:53.347 "dhgroup": "ffdhe6144"
00:20:53.347 }
00:20:53.347 }
00:20:53.347 ]'
00:20:53.347 15:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:53.605 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:53.605 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:53.605 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:53.605 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:53.606 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:53.606 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:53.606 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:53.864 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW:
00:20:53.864 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW:
00:20:54.433 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:54.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:54.434 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:54.434 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:54.434 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.434 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:54.434 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:54.434 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:54.434 15:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:54.434 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:55.003
00:20:55.003 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:55.003 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:55.003 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:55.003 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:55.003 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:55.003 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:55.003 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.003 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:55.003 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:55.003 {
00:20:55.003 "cntlid": 39,
00:20:55.003 "qid": 0,
00:20:55.003 "state": "enabled",
00:20:55.003 "thread": "nvmf_tgt_poll_group_000",
00:20:55.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:20:55.003 "listen_address": {
00:20:55.003 "trtype": "TCP",
00:20:55.003 "adrfam": "IPv4",
00:20:55.003 "traddr": "10.0.0.2",
00:20:55.003 "trsvcid": "4420"
00:20:55.003 },
00:20:55.003 "peer_address": {
00:20:55.003 "trtype": "TCP",
00:20:55.003 "adrfam": "IPv4",
00:20:55.003 "traddr": "10.0.0.1",
00:20:55.003 "trsvcid": "58914"
00:20:55.003 },
00:20:55.003 "auth": {
00:20:55.003 "state": "completed",
00:20:55.003 "digest": "sha256",
00:20:55.003 "dhgroup": "ffdhe6144"
00:20:55.003 }
00:20:55.003 }
00:20:55.003 ]'
00:20:55.003 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:55.262 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:55.263 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:55.263 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:55.263 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:55.263 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:55.263 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:55.263 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:55.522 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=:
00:20:55.522 15:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=:
00:20:56.089 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:56.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:56.089 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:56.089 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:56.090
15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.090 15:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.658 00:20:56.658 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.658 15:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.658 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.917 { 00:20:56.917 "cntlid": 41, 00:20:56.917 "qid": 0, 00:20:56.917 "state": "enabled", 00:20:56.917 "thread": "nvmf_tgt_poll_group_000", 00:20:56.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:56.917 "listen_address": { 00:20:56.917 "trtype": "TCP", 00:20:56.917 "adrfam": "IPv4", 00:20:56.917 "traddr": "10.0.0.2", 00:20:56.917 "trsvcid": "4420" 00:20:56.917 }, 00:20:56.917 "peer_address": { 00:20:56.917 "trtype": "TCP", 00:20:56.917 "adrfam": "IPv4", 00:20:56.917 "traddr": "10.0.0.1", 00:20:56.917 "trsvcid": "58942" 00:20:56.917 }, 00:20:56.917 "auth": { 00:20:56.917 "state": "completed", 00:20:56.917 "digest": "sha256", 00:20:56.917 "dhgroup": "ffdhe8192" 00:20:56.917 } 00:20:56.917 } 00:20:56.917 ]' 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.917 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.177 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:20:57.177 15:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:20:57.745 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.745 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:57.745 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.745 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.745 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.745 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.745 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:57.745 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.005 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.573 00:20:58.573 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.573 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.573 15:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.573 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.573 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.573 15:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.573 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.573 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.573 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.573 { 00:20:58.573 "cntlid": 43, 00:20:58.573 "qid": 0, 00:20:58.573 "state": "enabled", 00:20:58.573 "thread": "nvmf_tgt_poll_group_000", 00:20:58.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:58.573 "listen_address": { 00:20:58.573 "trtype": "TCP", 00:20:58.573 "adrfam": "IPv4", 00:20:58.573 "traddr": "10.0.0.2", 00:20:58.573 "trsvcid": "4420" 00:20:58.573 }, 00:20:58.573 "peer_address": { 00:20:58.573 "trtype": "TCP", 00:20:58.573 "adrfam": "IPv4", 00:20:58.573 "traddr": "10.0.0.1", 00:20:58.573 "trsvcid": "58956" 00:20:58.573 }, 00:20:58.573 "auth": { 00:20:58.573 "state": "completed", 00:20:58.573 "digest": "sha256", 00:20:58.573 "dhgroup": "ffdhe8192" 00:20:58.573 } 00:20:58.573 } 00:20:58.573 ]' 00:20:58.573 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.832 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:58.832 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.832 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.832 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.832 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.832 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.832 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.092 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:20:59.092 15:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.661 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.229 00:21:00.229 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.229 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.229 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.488 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.488 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.488 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.488 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.488 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.488 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.488 { 00:21:00.488 "cntlid": 45, 00:21:00.488 "qid": 0, 00:21:00.488 "state": "enabled", 00:21:00.488 "thread": "nvmf_tgt_poll_group_000", 00:21:00.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:00.488 
"listen_address": { 00:21:00.488 "trtype": "TCP", 00:21:00.488 "adrfam": "IPv4", 00:21:00.488 "traddr": "10.0.0.2", 00:21:00.488 "trsvcid": "4420" 00:21:00.488 }, 00:21:00.488 "peer_address": { 00:21:00.488 "trtype": "TCP", 00:21:00.488 "adrfam": "IPv4", 00:21:00.488 "traddr": "10.0.0.1", 00:21:00.488 "trsvcid": "58986" 00:21:00.488 }, 00:21:00.488 "auth": { 00:21:00.488 "state": "completed", 00:21:00.488 "digest": "sha256", 00:21:00.488 "dhgroup": "ffdhe8192" 00:21:00.488 } 00:21:00.488 } 00:21:00.488 ]' 00:21:00.488 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.488 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.488 15:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.488 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.488 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.488 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.488 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.488 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.747 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:00.747 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:01.315 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.315 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:01.315 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.315 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.315 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.315 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.315 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:01.315 15:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.574 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.143 00:21:02.143 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.143 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:21:02.143 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.143 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.143 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.143 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.143 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.143 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.143 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.143 { 00:21:02.143 "cntlid": 47, 00:21:02.143 "qid": 0, 00:21:02.143 "state": "enabled", 00:21:02.143 "thread": "nvmf_tgt_poll_group_000", 00:21:02.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:02.143 "listen_address": { 00:21:02.143 "trtype": "TCP", 00:21:02.143 "adrfam": "IPv4", 00:21:02.143 "traddr": "10.0.0.2", 00:21:02.143 "trsvcid": "4420" 00:21:02.143 }, 00:21:02.143 "peer_address": { 00:21:02.143 "trtype": "TCP", 00:21:02.143 "adrfam": "IPv4", 00:21:02.143 "traddr": "10.0.0.1", 00:21:02.143 "trsvcid": "59014" 00:21:02.143 }, 00:21:02.143 "auth": { 00:21:02.143 "state": "completed", 00:21:02.143 "digest": "sha256", 00:21:02.143 "dhgroup": "ffdhe8192" 00:21:02.143 } 00:21:02.143 } 00:21:02.143 ]' 00:21:02.143 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.402 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:02.402 15:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.402 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.402 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.402 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.402 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.402 15:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.660 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:02.660 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:03.228 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.228 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:03.228 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:03.228 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.228 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.228 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:03.228 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.228 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.228 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:03.228 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:03.488 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:03.488 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.488 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.488 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:03.488 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:03.488 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.488 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.488 
15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.488 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.488 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.488 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.488 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.488 15:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.748 00:21:03.748 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.748 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.748 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.748 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.748 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.748 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.748 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.748 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.748 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.748 { 00:21:03.748 "cntlid": 49, 00:21:03.748 "qid": 0, 00:21:03.748 "state": "enabled", 00:21:03.748 "thread": "nvmf_tgt_poll_group_000", 00:21:03.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:03.748 "listen_address": { 00:21:03.748 "trtype": "TCP", 00:21:03.748 "adrfam": "IPv4", 00:21:03.748 "traddr": "10.0.0.2", 00:21:03.748 "trsvcid": "4420" 00:21:03.748 }, 00:21:03.748 "peer_address": { 00:21:03.748 "trtype": "TCP", 00:21:03.748 "adrfam": "IPv4", 00:21:03.748 "traddr": "10.0.0.1", 00:21:03.748 "trsvcid": "51500" 00:21:03.748 }, 00:21:03.748 "auth": { 00:21:03.748 "state": "completed", 00:21:03.748 "digest": "sha384", 00:21:03.748 "dhgroup": "null" 00:21:03.748 } 00:21:03.748 } 00:21:03.748 ]' 00:21:03.748 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.007 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.007 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.007 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:04.007 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.007 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.007 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:21:04.007 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.266 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:04.266 15:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:04.835 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.835 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:04.835 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.835 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.835 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.835 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.835 15:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:04.835 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.097 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.097 00:21:05.362 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.362 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.362 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.362 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.362 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.362 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.362 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.362 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.362 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.362 { 00:21:05.362 "cntlid": 51, 00:21:05.362 "qid": 0, 00:21:05.362 "state": "enabled", 00:21:05.362 "thread": "nvmf_tgt_poll_group_000", 00:21:05.362 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:05.362 "listen_address": { 00:21:05.362 "trtype": "TCP", 00:21:05.362 "adrfam": "IPv4", 00:21:05.362 "traddr": "10.0.0.2", 00:21:05.362 "trsvcid": "4420" 00:21:05.362 }, 00:21:05.362 "peer_address": { 00:21:05.362 "trtype": "TCP", 00:21:05.362 "adrfam": "IPv4", 00:21:05.362 "traddr": "10.0.0.1", 00:21:05.362 "trsvcid": "51540" 00:21:05.362 }, 00:21:05.362 "auth": { 00:21:05.362 "state": "completed", 00:21:05.362 "digest": "sha384", 00:21:05.362 "dhgroup": "null" 00:21:05.362 } 00:21:05.362 } 00:21:05.362 ]' 00:21:05.362 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.362 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.362 15:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.622 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:05.622 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.622 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.622 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.622 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.881 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:05.881 15:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:06.479 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.479 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:06.479 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.479 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.479 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.479 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.480 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:06.480 15:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.480 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.739 00:21:06.739 15:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.739 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.739 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.998 { 00:21:06.998 "cntlid": 53, 00:21:06.998 "qid": 0, 00:21:06.998 "state": "enabled", 00:21:06.998 "thread": "nvmf_tgt_poll_group_000", 00:21:06.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:06.998 "listen_address": { 00:21:06.998 "trtype": "TCP", 00:21:06.998 "adrfam": "IPv4", 00:21:06.998 "traddr": "10.0.0.2", 00:21:06.998 "trsvcid": "4420" 00:21:06.998 }, 00:21:06.998 "peer_address": { 00:21:06.998 "trtype": "TCP", 00:21:06.998 "adrfam": "IPv4", 00:21:06.998 "traddr": "10.0.0.1", 00:21:06.998 "trsvcid": "51572" 00:21:06.998 }, 00:21:06.998 "auth": { 00:21:06.998 "state": "completed", 00:21:06.998 "digest": "sha384", 00:21:06.998 "dhgroup": "null" 00:21:06.998 } 00:21:06.998 } 00:21:06.998 ]' 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.998 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.257 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:07.257 15:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:07.825 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.825 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:07.825 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.825 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.825 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.825 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.825 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:07.825 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:08.084 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:08.084 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.084 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.084 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:08.084 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:08.084 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.084 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:08.084 
15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.084 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.084 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.084 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:08.084 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.084 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.343 00:21:08.343 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.343 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.343 15:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.602 15:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.602 { 00:21:08.602 "cntlid": 55, 00:21:08.602 "qid": 0, 00:21:08.602 "state": "enabled", 00:21:08.602 "thread": "nvmf_tgt_poll_group_000", 00:21:08.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:08.602 "listen_address": { 00:21:08.602 "trtype": "TCP", 00:21:08.602 "adrfam": "IPv4", 00:21:08.602 "traddr": "10.0.0.2", 00:21:08.602 "trsvcid": "4420" 00:21:08.602 }, 00:21:08.602 "peer_address": { 00:21:08.602 "trtype": "TCP", 00:21:08.602 "adrfam": "IPv4", 00:21:08.602 "traddr": "10.0.0.1", 00:21:08.602 "trsvcid": "51588" 00:21:08.602 }, 00:21:08.602 "auth": { 00:21:08.602 "state": "completed", 00:21:08.602 "digest": "sha384", 00:21:08.602 "dhgroup": "null" 00:21:08.602 } 00:21:08.602 } 00:21:08.602 ]' 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.602 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.861 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:08.861 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:09.429 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.429 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:09.429 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.429 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.429 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.429 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.429 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.429 15:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.429 15:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.688 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.947 00:21:09.947 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.947 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.947 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.206 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.206 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.206 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.206 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.207 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.207 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.207 { 00:21:10.207 "cntlid": 57, 00:21:10.207 "qid": 0, 00:21:10.207 "state": "enabled", 00:21:10.207 "thread": "nvmf_tgt_poll_group_000", 00:21:10.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:10.207 "listen_address": { 00:21:10.207 "trtype": "TCP", 00:21:10.207 "adrfam": "IPv4", 00:21:10.207 "traddr": "10.0.0.2", 00:21:10.207 
"trsvcid": "4420" 00:21:10.207 }, 00:21:10.207 "peer_address": { 00:21:10.207 "trtype": "TCP", 00:21:10.207 "adrfam": "IPv4", 00:21:10.207 "traddr": "10.0.0.1", 00:21:10.207 "trsvcid": "51620" 00:21:10.207 }, 00:21:10.207 "auth": { 00:21:10.207 "state": "completed", 00:21:10.207 "digest": "sha384", 00:21:10.207 "dhgroup": "ffdhe2048" 00:21:10.207 } 00:21:10.207 } 00:21:10.207 ]' 00:21:10.207 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.207 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.207 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.207 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:10.207 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.207 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.207 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.207 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.466 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:10.466 15:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:11.034 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.034 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:11.034 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.034 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.034 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.034 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.034 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:11.034 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:11.293 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:11.293 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.293 15:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.293 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:11.293 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:11.293 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.293 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.293 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.293 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.293 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.293 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.293 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.293 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.552 00:21:11.552 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.552 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.552 15:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.811 { 00:21:11.811 "cntlid": 59, 00:21:11.811 "qid": 0, 00:21:11.811 "state": "enabled", 00:21:11.811 "thread": "nvmf_tgt_poll_group_000", 00:21:11.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:11.811 "listen_address": { 00:21:11.811 "trtype": "TCP", 00:21:11.811 "adrfam": "IPv4", 00:21:11.811 "traddr": "10.0.0.2", 00:21:11.811 "trsvcid": "4420" 00:21:11.811 }, 00:21:11.811 "peer_address": { 00:21:11.811 "trtype": "TCP", 00:21:11.811 "adrfam": "IPv4", 00:21:11.811 "traddr": "10.0.0.1", 00:21:11.811 "trsvcid": "51656" 00:21:11.811 }, 00:21:11.811 "auth": { 00:21:11.811 "state": "completed", 00:21:11.811 "digest": "sha384", 00:21:11.811 "dhgroup": "ffdhe2048" 00:21:11.811 } 00:21:11.811 } 00:21:11.811 ]' 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.811 15:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.811 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.070 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:12.070 15:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:12.637 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.637 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:12.637 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.637 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.637 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.637 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.637 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:12.637 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:12.896 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:12.896 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.896 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.896 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:12.896 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:12.896 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.896 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:12.897 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.897 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.897 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.897 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.897 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.897 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.156 00:21:13.156 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.156 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.156 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.156 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.156 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.156 15:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.156 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.156 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.156 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.156 { 00:21:13.156 "cntlid": 61, 00:21:13.156 "qid": 0, 00:21:13.156 "state": "enabled", 00:21:13.156 "thread": "nvmf_tgt_poll_group_000", 00:21:13.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:13.156 "listen_address": { 00:21:13.156 "trtype": "TCP", 00:21:13.156 "adrfam": "IPv4", 00:21:13.156 "traddr": "10.0.0.2", 00:21:13.156 "trsvcid": "4420" 00:21:13.156 }, 00:21:13.156 "peer_address": { 00:21:13.156 "trtype": "TCP", 00:21:13.156 "adrfam": "IPv4", 00:21:13.156 "traddr": "10.0.0.1", 00:21:13.156 "trsvcid": "49324" 00:21:13.156 }, 00:21:13.156 "auth": { 00:21:13.156 "state": "completed", 00:21:13.156 "digest": "sha384", 00:21:13.156 "dhgroup": "ffdhe2048" 00:21:13.156 } 00:21:13.156 } 00:21:13.156 ]' 00:21:13.157 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.416 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.416 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.416 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:13.416 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.416 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.416 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.416 15:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.675 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:13.675 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.244 15:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.503 00:21:14.503 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.503 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.503 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.761 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.762 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.762 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.762 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.762 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.762 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.762 { 00:21:14.762 "cntlid": 63, 00:21:14.762 "qid": 0, 00:21:14.762 "state": "enabled", 00:21:14.762 "thread": "nvmf_tgt_poll_group_000", 00:21:14.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:14.762 "listen_address": { 00:21:14.762 "trtype": "TCP", 00:21:14.762 "adrfam": 
"IPv4", 00:21:14.762 "traddr": "10.0.0.2", 00:21:14.762 "trsvcid": "4420" 00:21:14.762 }, 00:21:14.762 "peer_address": { 00:21:14.762 "trtype": "TCP", 00:21:14.762 "adrfam": "IPv4", 00:21:14.762 "traddr": "10.0.0.1", 00:21:14.762 "trsvcid": "49342" 00:21:14.762 }, 00:21:14.762 "auth": { 00:21:14.762 "state": "completed", 00:21:14.762 "digest": "sha384", 00:21:14.762 "dhgroup": "ffdhe2048" 00:21:14.762 } 00:21:14.762 } 00:21:14.762 ]' 00:21:14.762 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.762 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.762 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.762 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.020 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.020 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.020 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.020 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.020 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:15.020 15:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:15.587 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.587 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:15.587 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.587 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.587 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.587 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.587 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.587 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.587 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.846 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:15.846 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.846 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.846 
15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:15.846 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:15.846 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.846 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.846 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.846 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.846 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.846 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.846 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.846 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.104 00:21:16.105 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.105 15:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.105 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.363 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.363 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.363 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.363 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.363 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.363 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.363 { 00:21:16.363 "cntlid": 65, 00:21:16.363 "qid": 0, 00:21:16.363 "state": "enabled", 00:21:16.363 "thread": "nvmf_tgt_poll_group_000", 00:21:16.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:16.363 "listen_address": { 00:21:16.363 "trtype": "TCP", 00:21:16.363 "adrfam": "IPv4", 00:21:16.363 "traddr": "10.0.0.2", 00:21:16.363 "trsvcid": "4420" 00:21:16.363 }, 00:21:16.363 "peer_address": { 00:21:16.363 "trtype": "TCP", 00:21:16.363 "adrfam": "IPv4", 00:21:16.364 "traddr": "10.0.0.1", 00:21:16.364 "trsvcid": "49356" 00:21:16.364 }, 00:21:16.364 "auth": { 00:21:16.364 "state": "completed", 00:21:16.364 "digest": "sha384", 00:21:16.364 "dhgroup": "ffdhe3072" 00:21:16.364 } 00:21:16.364 } 00:21:16.364 ]' 00:21:16.364 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.364 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:21:16.364 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.364 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.364 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.364 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.364 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.364 15:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.622 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:16.622 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:17.190 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.190 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:17.190 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.190 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.190 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.190 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.190 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.190 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.448 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:17.448 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.448 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.448 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:17.448 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:17.448 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.448 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:17.448 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.449 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.449 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.449 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.449 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.449 15:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.707 00:21:17.707 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.707 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.707 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.965 15:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.965 { 00:21:17.965 "cntlid": 67, 00:21:17.965 "qid": 0, 00:21:17.965 "state": "enabled", 00:21:17.965 "thread": "nvmf_tgt_poll_group_000", 00:21:17.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:17.965 "listen_address": { 00:21:17.965 "trtype": "TCP", 00:21:17.965 "adrfam": "IPv4", 00:21:17.965 "traddr": "10.0.0.2", 00:21:17.965 "trsvcid": "4420" 00:21:17.965 }, 00:21:17.965 "peer_address": { 00:21:17.965 "trtype": "TCP", 00:21:17.965 "adrfam": "IPv4", 00:21:17.965 "traddr": "10.0.0.1", 00:21:17.965 "trsvcid": "49390" 00:21:17.965 }, 00:21:17.965 "auth": { 00:21:17.965 "state": "completed", 00:21:17.965 "digest": "sha384", 00:21:17.965 "dhgroup": "ffdhe3072" 00:21:17.965 } 00:21:17.965 } 00:21:17.965 ]' 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.965 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.224 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:18.224 15:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:18.798 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.798 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:18.798 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.798 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.798 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.798 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.798 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:18.798 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.056 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.315 00:21:19.315 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.315 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.315 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.573 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.573 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.573 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.573 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.573 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.573 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.573 { 00:21:19.573 "cntlid": 69, 00:21:19.573 "qid": 0, 00:21:19.573 "state": "enabled", 00:21:19.573 "thread": "nvmf_tgt_poll_group_000", 00:21:19.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:19.573 
"listen_address": { 00:21:19.573 "trtype": "TCP", 00:21:19.573 "adrfam": "IPv4", 00:21:19.573 "traddr": "10.0.0.2", 00:21:19.573 "trsvcid": "4420" 00:21:19.573 }, 00:21:19.574 "peer_address": { 00:21:19.574 "trtype": "TCP", 00:21:19.574 "adrfam": "IPv4", 00:21:19.574 "traddr": "10.0.0.1", 00:21:19.574 "trsvcid": "49414" 00:21:19.574 }, 00:21:19.574 "auth": { 00:21:19.574 "state": "completed", 00:21:19.574 "digest": "sha384", 00:21:19.574 "dhgroup": "ffdhe3072" 00:21:19.574 } 00:21:19.574 } 00:21:19.574 ]' 00:21:19.574 15:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.574 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.574 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.574 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:19.574 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.574 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.574 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.574 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.832 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:19.832 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:20.400 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.400 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:20.400 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.400 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.400 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.400 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.400 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:20.400 15:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.658 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.916 00:21:20.916 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.916 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:21:20.916 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.916 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.916 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.916 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.916 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.175 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.175 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.175 { 00:21:21.175 "cntlid": 71, 00:21:21.175 "qid": 0, 00:21:21.175 "state": "enabled", 00:21:21.175 "thread": "nvmf_tgt_poll_group_000", 00:21:21.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:21.175 "listen_address": { 00:21:21.175 "trtype": "TCP", 00:21:21.175 "adrfam": "IPv4", 00:21:21.175 "traddr": "10.0.0.2", 00:21:21.175 "trsvcid": "4420" 00:21:21.175 }, 00:21:21.175 "peer_address": { 00:21:21.175 "trtype": "TCP", 00:21:21.175 "adrfam": "IPv4", 00:21:21.175 "traddr": "10.0.0.1", 00:21:21.175 "trsvcid": "49442" 00:21:21.175 }, 00:21:21.175 "auth": { 00:21:21.175 "state": "completed", 00:21:21.175 "digest": "sha384", 00:21:21.175 "dhgroup": "ffdhe3072" 00:21:21.175 } 00:21:21.175 } 00:21:21.175 ]' 00:21:21.175 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.175 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.175 15:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.175 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:21.175 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.175 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.175 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.175 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.434 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:21.434 15:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:22.001 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.001 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:22.001 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:22.001 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.001 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.001 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.001 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.001 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:22.001 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.261 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.519 00:21:22.519 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.519 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.519 15:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.519 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.519 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.519 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.519 15:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.778 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.778 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.778 { 00:21:22.778 "cntlid": 73, 00:21:22.778 "qid": 0, 00:21:22.778 "state": "enabled", 00:21:22.778 "thread": "nvmf_tgt_poll_group_000", 00:21:22.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:22.778 "listen_address": { 00:21:22.778 "trtype": "TCP", 00:21:22.778 "adrfam": "IPv4", 00:21:22.778 "traddr": "10.0.0.2", 00:21:22.778 "trsvcid": "4420" 00:21:22.778 }, 00:21:22.778 "peer_address": { 00:21:22.778 "trtype": "TCP", 00:21:22.778 "adrfam": "IPv4", 00:21:22.778 "traddr": "10.0.0.1", 00:21:22.779 "trsvcid": "40962" 00:21:22.779 }, 00:21:22.779 "auth": { 00:21:22.779 "state": "completed", 00:21:22.779 "digest": "sha384", 00:21:22.779 "dhgroup": "ffdhe4096" 00:21:22.779 } 00:21:22.779 } 00:21:22.779 ]' 00:21:22.779 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.779 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.779 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.779 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:22.779 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.779 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.779 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.779 15:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.038 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:23.038 15:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:23.604 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.604 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:23.604 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.604 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.605 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.605 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.605 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:23.605 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.864 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.123 00:21:24.123 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.123 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.123 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.382 { 00:21:24.382 "cntlid": 75, 00:21:24.382 "qid": 0, 00:21:24.382 "state": "enabled", 00:21:24.382 "thread": "nvmf_tgt_poll_group_000", 00:21:24.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:24.382 
"listen_address": { 00:21:24.382 "trtype": "TCP", 00:21:24.382 "adrfam": "IPv4", 00:21:24.382 "traddr": "10.0.0.2", 00:21:24.382 "trsvcid": "4420" 00:21:24.382 }, 00:21:24.382 "peer_address": { 00:21:24.382 "trtype": "TCP", 00:21:24.382 "adrfam": "IPv4", 00:21:24.382 "traddr": "10.0.0.1", 00:21:24.382 "trsvcid": "40992" 00:21:24.382 }, 00:21:24.382 "auth": { 00:21:24.382 "state": "completed", 00:21:24.382 "digest": "sha384", 00:21:24.382 "dhgroup": "ffdhe4096" 00:21:24.382 } 00:21:24.382 } 00:21:24.382 ]' 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.382 15:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.640 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:24.640 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:25.322 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.322 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:25.322 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.322 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.322 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.322 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.322 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.323 15:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.581 00:21:25.581 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:25.581 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.581 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.840 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.840 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.840 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.840 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.840 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.840 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.840 { 00:21:25.840 "cntlid": 77, 00:21:25.840 "qid": 0, 00:21:25.840 "state": "enabled", 00:21:25.840 "thread": "nvmf_tgt_poll_group_000", 00:21:25.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:25.840 "listen_address": { 00:21:25.840 "trtype": "TCP", 00:21:25.840 "adrfam": "IPv4", 00:21:25.840 "traddr": "10.0.0.2", 00:21:25.840 "trsvcid": "4420" 00:21:25.840 }, 00:21:25.840 "peer_address": { 00:21:25.840 "trtype": "TCP", 00:21:25.840 "adrfam": "IPv4", 00:21:25.840 "traddr": "10.0.0.1", 00:21:25.840 "trsvcid": "41030" 00:21:25.840 }, 00:21:25.840 "auth": { 00:21:25.840 "state": "completed", 00:21:25.840 "digest": "sha384", 00:21:25.840 "dhgroup": "ffdhe4096" 00:21:25.840 } 00:21:25.840 } 00:21:25.840 ]' 00:21:25.840 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.840 15:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.840 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.840 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:25.840 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.099 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.099 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.099 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.099 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:26.099 15:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:26.667 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.667 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:26.667 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.667 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.667 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.667 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.667 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:26.667 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:26.926 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:26.926 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.926 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.926 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:26.926 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.926 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.926 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:26.926 15:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.926 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.926 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.926 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.926 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.926 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.185 00:21:27.185 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.185 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.185 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.444 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.444 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.444 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.444 15:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.444 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.444 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.444 { 00:21:27.444 "cntlid": 79, 00:21:27.444 "qid": 0, 00:21:27.444 "state": "enabled", 00:21:27.444 "thread": "nvmf_tgt_poll_group_000", 00:21:27.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:27.444 "listen_address": { 00:21:27.444 "trtype": "TCP", 00:21:27.444 "adrfam": "IPv4", 00:21:27.444 "traddr": "10.0.0.2", 00:21:27.444 "trsvcid": "4420" 00:21:27.444 }, 00:21:27.444 "peer_address": { 00:21:27.444 "trtype": "TCP", 00:21:27.444 "adrfam": "IPv4", 00:21:27.444 "traddr": "10.0.0.1", 00:21:27.444 "trsvcid": "41042" 00:21:27.444 }, 00:21:27.444 "auth": { 00:21:27.444 "state": "completed", 00:21:27.444 "digest": "sha384", 00:21:27.444 "dhgroup": "ffdhe4096" 00:21:27.444 } 00:21:27.444 } 00:21:27.444 ]' 00:21:27.444 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.444 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.444 15:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.444 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.444 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.703 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.703 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.703 15:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.703 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:27.703 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:28.271 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.271 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:28.271 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.271 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.271 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.271 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.271 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.271 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:21:28.271 15:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.530 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.789 00:21:28.789 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.789 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.789 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.047 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.047 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.047 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.047 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.047 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.047 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.047 { 00:21:29.047 "cntlid": 81, 00:21:29.047 "qid": 0, 00:21:29.047 "state": "enabled", 00:21:29.047 "thread": "nvmf_tgt_poll_group_000", 00:21:29.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:29.047 "listen_address": { 
00:21:29.047 "trtype": "TCP", 00:21:29.047 "adrfam": "IPv4", 00:21:29.047 "traddr": "10.0.0.2", 00:21:29.047 "trsvcid": "4420" 00:21:29.047 }, 00:21:29.047 "peer_address": { 00:21:29.047 "trtype": "TCP", 00:21:29.047 "adrfam": "IPv4", 00:21:29.047 "traddr": "10.0.0.1", 00:21:29.047 "trsvcid": "41078" 00:21:29.047 }, 00:21:29.047 "auth": { 00:21:29.047 "state": "completed", 00:21:29.047 "digest": "sha384", 00:21:29.047 "dhgroup": "ffdhe6144" 00:21:29.047 } 00:21:29.047 } 00:21:29.047 ]' 00:21:29.047 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.047 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.047 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.047 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.047 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.305 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.305 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.305 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.306 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:29.306 15:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:29.872 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.872 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:29.872 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.872 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.872 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.872 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.873 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:29.873 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.131 15:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.699 00:21:30.699 15:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.699 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.699 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.699 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.699 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.699 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.699 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.699 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.699 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.699 { 00:21:30.699 "cntlid": 83, 00:21:30.699 "qid": 0, 00:21:30.699 "state": "enabled", 00:21:30.699 "thread": "nvmf_tgt_poll_group_000", 00:21:30.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:30.699 "listen_address": { 00:21:30.699 "trtype": "TCP", 00:21:30.699 "adrfam": "IPv4", 00:21:30.699 "traddr": "10.0.0.2", 00:21:30.699 "trsvcid": "4420" 00:21:30.699 }, 00:21:30.699 "peer_address": { 00:21:30.699 "trtype": "TCP", 00:21:30.699 "adrfam": "IPv4", 00:21:30.699 "traddr": "10.0.0.1", 00:21:30.699 "trsvcid": "41098" 00:21:30.699 }, 00:21:30.699 "auth": { 00:21:30.699 "state": "completed", 00:21:30.699 "digest": "sha384", 00:21:30.699 "dhgroup": "ffdhe6144" 00:21:30.699 } 00:21:30.699 } 00:21:30.699 ]' 00:21:30.699 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:21:30.699 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.699 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.957 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:30.957 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.957 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.957 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.957 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.957 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:30.957 15:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:31.523 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.523 15:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:31.524 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.524 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.524 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.524 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.524 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:31.524 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:31.782 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:31.782 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.782 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:31.782 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:31.782 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.782 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.782 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.782 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.782 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.782 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.782 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.782 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.783 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.350 00:21:32.350 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.350 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.350 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.350 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.350 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.350 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.350 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.350 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.350 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.350 { 00:21:32.350 "cntlid": 85, 00:21:32.350 "qid": 0, 00:21:32.350 "state": "enabled", 00:21:32.350 "thread": "nvmf_tgt_poll_group_000", 00:21:32.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:32.350 "listen_address": { 00:21:32.350 "trtype": "TCP", 00:21:32.350 "adrfam": "IPv4", 00:21:32.350 "traddr": "10.0.0.2", 00:21:32.350 "trsvcid": "4420" 00:21:32.350 }, 00:21:32.350 "peer_address": { 00:21:32.350 "trtype": "TCP", 00:21:32.350 "adrfam": "IPv4", 00:21:32.350 "traddr": "10.0.0.1", 00:21:32.350 "trsvcid": "41128" 00:21:32.350 }, 00:21:32.350 "auth": { 00:21:32.350 "state": "completed", 00:21:32.350 "digest": "sha384", 00:21:32.350 "dhgroup": "ffdhe6144" 00:21:32.350 } 00:21:32.350 } 00:21:32.350 ]' 00:21:32.350 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.350 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:32.350 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.608 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:32.608 15:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.608 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:32.608 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.608 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.608 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:32.608 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:33.175 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.433 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:33.433 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.433 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.434 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.434 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
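The iterations in this log all follow one cycle per digest/dhgroup/key combination: configure the host's DH-HMAC-CHAP options, attach a controller with the key pair, verify the qpair's auth state, then detach and remove the host. The dry-run sketch below reconstructs that sequence from the commands echoed above; it only prints the RPC invocations rather than executing them, since a live SPDK target and `/var/tmp/host.sock` are assumed, and it is an illustrative outline, not the actual `target/auth.sh` source.

```shell
#!/bin/sh
# Values taken verbatim from the log above.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
SOCK="/var/tmp/host.sock"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562"

# Dry run: print each RPC instead of executing it. On a real target,
# replace 'echo' with direct execution of "$@".
run() { echo "$@"; }

# One authentication cycle: $1 = digest, $2 = dhgroup, $3 = key id.
auth_cycle() {
    # Restrict the host to a single digest/dhgroup pair for this pass.
    run "$RPC" -s "$SOCK" bdev_nvme_set_options \
        --dhchap-digests "$1" --dhchap-dhgroups "$2"
    # Attach with the per-key DH-HMAC-CHAP secrets (bidirectional here).
    run "$RPC" -s "$SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key "key$3" --dhchap-ctrlr-key "ckey$3"
    # The test then checks .[].name == nvme0 and auth.state == completed
    # via bdev_nvme_get_controllers / nvmf_subsystem_get_qpairs.
    run "$RPC" -s "$SOCK" bdev_nvme_get_controllers
    # Tear down before the next key/dhgroup iteration.
    run "$RPC" -s "$SOCK" bdev_nvme_detach_controller nvme0
}

auth_cycle sha384 ffdhe6144 2
```

In the live test the same cycle also runs `nvme connect`/`nvme disconnect` against the kernel initiator with the DHHC-1 secrets, and `nvmf_subsystem_add_host`/`nvmf_subsystem_remove_host` on the target side, as the surrounding log lines show.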
00:21:33.434 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:33.434 15:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.434 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.000 00:21:34.000 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.000 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.000 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.000 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.000 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.000 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.000 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.000 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.000 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.000 { 00:21:34.000 "cntlid": 87, 00:21:34.000 "qid": 0, 00:21:34.000 "state": "enabled", 00:21:34.000 "thread": "nvmf_tgt_poll_group_000", 00:21:34.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:34.000 "listen_address": { 00:21:34.000 "trtype": 
"TCP", 00:21:34.000 "adrfam": "IPv4", 00:21:34.000 "traddr": "10.0.0.2", 00:21:34.000 "trsvcid": "4420" 00:21:34.000 }, 00:21:34.000 "peer_address": { 00:21:34.000 "trtype": "TCP", 00:21:34.000 "adrfam": "IPv4", 00:21:34.000 "traddr": "10.0.0.1", 00:21:34.000 "trsvcid": "46620" 00:21:34.000 }, 00:21:34.000 "auth": { 00:21:34.000 "state": "completed", 00:21:34.000 "digest": "sha384", 00:21:34.000 "dhgroup": "ffdhe6144" 00:21:34.000 } 00:21:34.000 } 00:21:34.000 ]' 00:21:34.000 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.259 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.259 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.259 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:34.259 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.259 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.259 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.259 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.518 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:34.518 15:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:35.086 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.086 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:35.086 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.086 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.086 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.086 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.086 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.086 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:35.086 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:35.086 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:35.087 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.087 15:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:35.087 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:35.087 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:35.087 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.087 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.087 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.087 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.087 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.087 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.087 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.087 15:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.654 00:21:35.654 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.654 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.654 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.913 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.913 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.913 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.913 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.913 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.913 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.913 { 00:21:35.913 "cntlid": 89, 00:21:35.913 "qid": 0, 00:21:35.913 "state": "enabled", 00:21:35.913 "thread": "nvmf_tgt_poll_group_000", 00:21:35.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:35.913 "listen_address": { 00:21:35.913 "trtype": "TCP", 00:21:35.913 "adrfam": "IPv4", 00:21:35.913 "traddr": "10.0.0.2", 00:21:35.913 "trsvcid": "4420" 00:21:35.913 }, 00:21:35.913 "peer_address": { 00:21:35.913 "trtype": "TCP", 00:21:35.913 "adrfam": "IPv4", 00:21:35.913 "traddr": "10.0.0.1", 00:21:35.913 "trsvcid": "46642" 00:21:35.913 }, 00:21:35.913 "auth": { 00:21:35.913 "state": "completed", 00:21:35.913 "digest": "sha384", 00:21:35.913 "dhgroup": "ffdhe8192" 00:21:35.913 } 00:21:35.913 } 00:21:35.913 ]' 00:21:35.913 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.913 15:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.913 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.913 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.913 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.913 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.913 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.171 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.171 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:36.171 15:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:36.739 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:36.739 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:36.739 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.739 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.739 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.739 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.739 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.739 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.998 15:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.566 00:21:37.566 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.566 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.566 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.566 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.825 { 00:21:37.825 "cntlid": 91, 00:21:37.825 "qid": 0, 00:21:37.825 "state": "enabled", 00:21:37.825 "thread": "nvmf_tgt_poll_group_000", 00:21:37.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:37.825 "listen_address": { 00:21:37.825 "trtype": "TCP", 00:21:37.825 "adrfam": "IPv4", 00:21:37.825 "traddr": "10.0.0.2", 00:21:37.825 "trsvcid": "4420" 00:21:37.825 }, 00:21:37.825 "peer_address": { 00:21:37.825 "trtype": "TCP", 00:21:37.825 "adrfam": "IPv4", 00:21:37.825 "traddr": "10.0.0.1", 00:21:37.825 "trsvcid": "46672" 00:21:37.825 }, 00:21:37.825 "auth": { 00:21:37.825 "state": "completed", 00:21:37.825 "digest": "sha384", 00:21:37.825 "dhgroup": "ffdhe8192" 00:21:37.825 } 00:21:37.825 } 00:21:37.825 ]' 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.825 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.083 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:38.083 15:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:38.650 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.650 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:38.650 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.650 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.650 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.650 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:38.650 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:38.650 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.909 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.478 00:21:39.478 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.478 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.478 15:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.478 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.478 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.478 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.478 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.478 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.478 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.478 { 00:21:39.478 "cntlid": 93, 00:21:39.478 "qid": 0, 00:21:39.478 "state": "enabled", 00:21:39.478 "thread": "nvmf_tgt_poll_group_000", 00:21:39.478 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:39.478 "listen_address": { 00:21:39.478 "trtype": "TCP", 00:21:39.478 "adrfam": "IPv4", 00:21:39.478 "traddr": "10.0.0.2", 00:21:39.478 "trsvcid": "4420" 00:21:39.478 }, 00:21:39.478 "peer_address": { 00:21:39.478 "trtype": "TCP", 00:21:39.478 "adrfam": "IPv4", 00:21:39.478 "traddr": "10.0.0.1", 00:21:39.478 "trsvcid": "46700" 00:21:39.478 }, 00:21:39.478 "auth": { 00:21:39.478 "state": "completed", 00:21:39.478 "digest": "sha384", 00:21:39.478 "dhgroup": "ffdhe8192" 00:21:39.478 } 00:21:39.478 } 00:21:39.478 ]' 00:21:39.478 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.478 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:39.478 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.737 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.737 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.737 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.737 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.737 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.737 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:39.737 15:26:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:40.305 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.305 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:40.305 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.305 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.305 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.305 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.305 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:40.305 15:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.565 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.132 00:21:41.132 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:41.132 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.132 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.391 { 00:21:41.391 "cntlid": 95, 00:21:41.391 "qid": 0, 00:21:41.391 "state": "enabled", 00:21:41.391 "thread": "nvmf_tgt_poll_group_000", 00:21:41.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:41.391 "listen_address": { 00:21:41.391 "trtype": "TCP", 00:21:41.391 "adrfam": "IPv4", 00:21:41.391 "traddr": "10.0.0.2", 00:21:41.391 "trsvcid": "4420" 00:21:41.391 }, 00:21:41.391 "peer_address": { 00:21:41.391 "trtype": "TCP", 00:21:41.391 "adrfam": "IPv4", 00:21:41.391 "traddr": "10.0.0.1", 00:21:41.391 "trsvcid": "46712" 00:21:41.391 }, 00:21:41.391 "auth": { 00:21:41.391 "state": "completed", 00:21:41.391 "digest": "sha384", 00:21:41.391 "dhgroup": "ffdhe8192" 00:21:41.391 } 00:21:41.391 } 00:21:41.391 ]' 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.391 15:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.391 15:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.649 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:41.649 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:42.217 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.217 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:42.217 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.217 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.217 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.217 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:42.217 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:42.217 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.217 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.217 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.476 15:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.735 00:21:42.735 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.735 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.735 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.993 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.993 15:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.993 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.993 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.993 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.993 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.993 { 00:21:42.993 "cntlid": 97, 00:21:42.993 "qid": 0, 00:21:42.993 "state": "enabled", 00:21:42.993 "thread": "nvmf_tgt_poll_group_000", 00:21:42.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:42.993 "listen_address": { 00:21:42.993 "trtype": "TCP", 00:21:42.993 "adrfam": "IPv4", 00:21:42.993 "traddr": "10.0.0.2", 00:21:42.993 "trsvcid": "4420" 00:21:42.993 }, 00:21:42.993 "peer_address": { 00:21:42.993 "trtype": "TCP", 00:21:42.993 "adrfam": "IPv4", 00:21:42.993 "traddr": "10.0.0.1", 00:21:42.993 "trsvcid": "35074" 00:21:42.993 }, 00:21:42.993 "auth": { 00:21:42.993 "state": "completed", 00:21:42.993 "digest": "sha512", 00:21:42.993 "dhgroup": "null" 00:21:42.993 } 00:21:42.993 } 00:21:42.993 ]' 00:21:42.993 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.993 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.994 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.994 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:42.994 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.994 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.994 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.994 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.252 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:43.252 15:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:43.820 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.820 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:43.820 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.820 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.820 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.820 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.820 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:43.820 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.079 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.338 00:21:44.338 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.338 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.338 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.597 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.597 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.597 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.597 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.597 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.597 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.597 { 00:21:44.597 "cntlid": 99, 
00:21:44.597 "qid": 0, 00:21:44.597 "state": "enabled", 00:21:44.597 "thread": "nvmf_tgt_poll_group_000", 00:21:44.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:44.597 "listen_address": { 00:21:44.597 "trtype": "TCP", 00:21:44.597 "adrfam": "IPv4", 00:21:44.597 "traddr": "10.0.0.2", 00:21:44.597 "trsvcid": "4420" 00:21:44.597 }, 00:21:44.597 "peer_address": { 00:21:44.597 "trtype": "TCP", 00:21:44.597 "adrfam": "IPv4", 00:21:44.597 "traddr": "10.0.0.1", 00:21:44.597 "trsvcid": "35086" 00:21:44.597 }, 00:21:44.597 "auth": { 00:21:44.597 "state": "completed", 00:21:44.597 "digest": "sha512", 00:21:44.597 "dhgroup": "null" 00:21:44.597 } 00:21:44.597 } 00:21:44.597 ]' 00:21:44.597 15:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.597 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.597 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.597 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:44.597 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.597 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.597 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.597 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.856 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret 
DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:44.856 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:45.423 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.423 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:45.423 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.423 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.423 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.423 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.423 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:45.423 15:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.682 00:21:45.682 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.941 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.941 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.941 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.941 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.941 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.941 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.941 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.941 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.941 { 00:21:45.941 "cntlid": 101, 00:21:45.941 "qid": 0, 00:21:45.941 "state": "enabled", 00:21:45.941 "thread": "nvmf_tgt_poll_group_000", 00:21:45.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:45.941 "listen_address": { 00:21:45.941 "trtype": "TCP", 00:21:45.941 "adrfam": "IPv4", 00:21:45.941 "traddr": "10.0.0.2", 00:21:45.941 "trsvcid": "4420" 00:21:45.941 }, 00:21:45.941 "peer_address": { 00:21:45.941 "trtype": "TCP", 00:21:45.941 "adrfam": "IPv4", 00:21:45.941 "traddr": "10.0.0.1", 00:21:45.941 "trsvcid": "35122" 00:21:45.941 }, 00:21:45.941 "auth": { 00:21:45.941 "state": "completed", 00:21:45.941 "digest": "sha512", 00:21:45.941 "dhgroup": "null" 00:21:45.941 } 00:21:45.942 } 
00:21:45.942 ]' 00:21:45.942 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.942 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.942 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.203 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:46.203 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.203 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.204 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.204 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.204 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:46.204 15:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:46.770 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.770 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.770 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:46.770 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.770 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.770 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.770 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.770 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:46.770 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.030 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.289 00:21:47.289 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.289 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.289 15:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.548 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.548 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:47.548 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.548 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.548 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.548 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.548 { 00:21:47.548 "cntlid": 103, 00:21:47.548 "qid": 0, 00:21:47.548 "state": "enabled", 00:21:47.548 "thread": "nvmf_tgt_poll_group_000", 00:21:47.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:47.549 "listen_address": { 00:21:47.549 "trtype": "TCP", 00:21:47.549 "adrfam": "IPv4", 00:21:47.549 "traddr": "10.0.0.2", 00:21:47.549 "trsvcid": "4420" 00:21:47.549 }, 00:21:47.549 "peer_address": { 00:21:47.549 "trtype": "TCP", 00:21:47.549 "adrfam": "IPv4", 00:21:47.549 "traddr": "10.0.0.1", 00:21:47.549 "trsvcid": "35150" 00:21:47.549 }, 00:21:47.549 "auth": { 00:21:47.549 "state": "completed", 00:21:47.549 "digest": "sha512", 00:21:47.549 "dhgroup": "null" 00:21:47.549 } 00:21:47.549 } 00:21:47.549 ]' 00:21:47.549 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.549 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.549 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.549 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:47.549 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.549 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.549 15:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.549 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.807 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:47.807 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:48.374 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.374 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:48.374 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.374 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.374 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.374 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.374 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.374 15:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.374 15:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.633 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.893 00:21:48.893 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.893 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.893 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.152 { 00:21:49.152 "cntlid": 105, 00:21:49.152 "qid": 0, 00:21:49.152 "state": "enabled", 00:21:49.152 "thread": "nvmf_tgt_poll_group_000", 00:21:49.152 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:49.152 "listen_address": { 00:21:49.152 "trtype": "TCP", 00:21:49.152 "adrfam": "IPv4", 00:21:49.152 "traddr": "10.0.0.2", 00:21:49.152 "trsvcid": "4420" 00:21:49.152 }, 00:21:49.152 "peer_address": { 00:21:49.152 "trtype": "TCP", 00:21:49.152 "adrfam": "IPv4", 00:21:49.152 "traddr": "10.0.0.1", 00:21:49.152 "trsvcid": "35174" 00:21:49.152 }, 00:21:49.152 "auth": { 00:21:49.152 "state": "completed", 00:21:49.152 "digest": "sha512", 00:21:49.152 "dhgroup": "ffdhe2048" 00:21:49.152 } 00:21:49.152 } 00:21:49.152 ]' 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.152 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.411 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret 
DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:49.411 15:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:49.979 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.979 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:49.979 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.979 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.979 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.979 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.979 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.979 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:50.239 15:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:50.239 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.239 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.239 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:50.239 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:50.239 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.239 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.239 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.239 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.239 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.239 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.239 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.239 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.498 00:21:50.498 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.498 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.498 15:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.756 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.756 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.756 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.756 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.756 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.756 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.757 { 00:21:50.757 "cntlid": 107, 00:21:50.757 "qid": 0, 00:21:50.757 "state": "enabled", 00:21:50.757 "thread": "nvmf_tgt_poll_group_000", 00:21:50.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:50.757 "listen_address": { 00:21:50.757 "trtype": "TCP", 00:21:50.757 "adrfam": "IPv4", 00:21:50.757 "traddr": "10.0.0.2", 00:21:50.757 "trsvcid": "4420" 00:21:50.757 }, 00:21:50.757 "peer_address": { 00:21:50.757 "trtype": "TCP", 00:21:50.757 "adrfam": "IPv4", 00:21:50.757 "traddr": "10.0.0.1", 00:21:50.757 "trsvcid": "35198" 00:21:50.757 }, 00:21:50.757 "auth": { 00:21:50.757 "state": 
"completed", 00:21:50.757 "digest": "sha512", 00:21:50.757 "dhgroup": "ffdhe2048" 00:21:50.757 } 00:21:50.757 } 00:21:50.757 ]' 00:21:50.757 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.757 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.757 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.757 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:50.757 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.757 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.757 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.757 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.016 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:51.016 15:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:51.583 15:26:19 
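The repeated `[[ sha512 == \s\h\a\5\1\2 ]]`-style checks in these records compare jq output against an expected value; escaping every character on the right-hand side forces a literal string match rather than a glob pattern. A stand-alone illustration, with the values hardcoded from the sample qpair JSON above (in the harness they come from `jq -r '.[0].auth.digest'` and friends):

```shell
#!/usr/bin/env bash
# Values as printed in the qpairs JSON above; the harness extracts them
# with jq -r '.[0].auth.digest' / '.dhgroup' / '.state'.
digest="sha512"; dhgroup="ffdhe2048"; state="completed"
# Backslash-escaping each character makes the RHS a literal, not a glob.
[[ $digest == \s\h\a\5\1\2 ]] && echo "digest ok"
[[ $dhgroup == \f\f\d\h\e\2\0\4\8 ]] && echo "dhgroup ok"
[[ $state == \c\o\m\p\l\e\t\e\d ]] && echo "auth state ok"
```

If any comparison fails, the `[[ ]]` returns non-zero and the test aborts under `set -e`, which is why a successful iteration shows all three checks in sequence.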
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.583 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:51.583 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.583 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.584 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.584 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.584 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.584 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.843 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.103 00:21:52.103 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.103 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.103 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.362 
15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.362 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.362 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.362 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.362 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.362 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.362 { 00:21:52.362 "cntlid": 109, 00:21:52.362 "qid": 0, 00:21:52.362 "state": "enabled", 00:21:52.362 "thread": "nvmf_tgt_poll_group_000", 00:21:52.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:52.362 "listen_address": { 00:21:52.362 "trtype": "TCP", 00:21:52.362 "adrfam": "IPv4", 00:21:52.362 "traddr": "10.0.0.2", 00:21:52.362 "trsvcid": "4420" 00:21:52.362 }, 00:21:52.362 "peer_address": { 00:21:52.362 "trtype": "TCP", 00:21:52.362 "adrfam": "IPv4", 00:21:52.362 "traddr": "10.0.0.1", 00:21:52.362 "trsvcid": "35236" 00:21:52.362 }, 00:21:52.362 "auth": { 00:21:52.362 "state": "completed", 00:21:52.362 "digest": "sha512", 00:21:52.362 "dhgroup": "ffdhe2048" 00:21:52.362 } 00:21:52.362 } 00:21:52.362 ]' 00:21:52.362 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.362 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.362 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.362 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:52.362 15:26:19 
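The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line in each `connect_authenticate` record uses `:+` parameter expansion, so the controller-key flag pair is only produced when a ckey exists for that key id (key3 later in this log is attached without `--dhchap-ctrlr-key`). A hedged, self-contained sketch of that behavior, with placeholder values in place of the real keys:

```shell
#!/usr/bin/env bash
# ':+' expands to the alternative text only when the variable is set and
# non-empty; an unset entry yields an empty array, i.e. no extra flags.
ckeys=([0]=placeholder0 [1]=placeholder1 [2]=placeholder2)  # no entry for 3
for id in 0 3; do
    ckey=(${ckeys[$id]:+--dhchap-ctrlr-key "ckey$id"})
    echo "id=$id nflags=${#ckey[@]}"
done
```

With an entry present the array holds two words (`--dhchap-ctrlr-key ckey0`); without one it is empty, so the later `bdev_nvme_attach_controller` call simply omits the option.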
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.362 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.362 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.362 15:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.620 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:52.620 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:53.188 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.188 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:53.188 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.188 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.188 
15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.188 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.188 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:53.188 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:53.448 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:53.448 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.448 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.448 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:53.448 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:53.448 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.448 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:53.448 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.448 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.448 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.448 15:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.448 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.448 15:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.707 00:21:53.707 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.707 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.707 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.969 { 00:21:53.969 "cntlid": 111, 
00:21:53.969 "qid": 0, 00:21:53.969 "state": "enabled", 00:21:53.969 "thread": "nvmf_tgt_poll_group_000", 00:21:53.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:53.969 "listen_address": { 00:21:53.969 "trtype": "TCP", 00:21:53.969 "adrfam": "IPv4", 00:21:53.969 "traddr": "10.0.0.2", 00:21:53.969 "trsvcid": "4420" 00:21:53.969 }, 00:21:53.969 "peer_address": { 00:21:53.969 "trtype": "TCP", 00:21:53.969 "adrfam": "IPv4", 00:21:53.969 "traddr": "10.0.0.1", 00:21:53.969 "trsvcid": "54734" 00:21:53.969 }, 00:21:53.969 "auth": { 00:21:53.969 "state": "completed", 00:21:53.969 "digest": "sha512", 00:21:53.969 "dhgroup": "ffdhe2048" 00:21:53.969 } 00:21:53.969 } 00:21:53.969 ]' 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.969 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.228 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:54.229 15:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:21:54.797 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.797 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:54.797 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.797 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.797 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.797 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.797 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.797 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.797 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:55.056 15:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:55.056 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.056 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.056 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:55.056 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.056 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.056 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.056 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.056 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.056 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.056 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.056 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.056 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.315 00:21:55.315 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.315 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.315 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.574 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.574 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.574 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.574 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.574 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.574 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.574 { 00:21:55.575 "cntlid": 113, 00:21:55.575 "qid": 0, 00:21:55.575 "state": "enabled", 00:21:55.575 "thread": "nvmf_tgt_poll_group_000", 00:21:55.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:55.575 "listen_address": { 00:21:55.575 "trtype": "TCP", 00:21:55.575 "adrfam": "IPv4", 00:21:55.575 "traddr": "10.0.0.2", 00:21:55.575 "trsvcid": "4420" 00:21:55.575 }, 00:21:55.575 "peer_address": { 00:21:55.575 "trtype": "TCP", 00:21:55.575 "adrfam": "IPv4", 00:21:55.575 "traddr": "10.0.0.1", 00:21:55.575 "trsvcid": "54752" 00:21:55.575 }, 00:21:55.575 "auth": { 00:21:55.575 "state": 
"completed", 00:21:55.575 "digest": "sha512", 00:21:55.575 "dhgroup": "ffdhe3072" 00:21:55.575 } 00:21:55.575 } 00:21:55.575 ]' 00:21:55.575 15:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.575 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.575 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.575 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:55.575 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.575 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.575 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.575 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.834 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:55.834 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret 
DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:21:56.403 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.403 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:56.403 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.403 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.403 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.403 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.403 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.403 15:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.663 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.922 00:21:56.923 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.923 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.923 15:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.182 { 00:21:57.182 "cntlid": 115, 00:21:57.182 "qid": 0, 00:21:57.182 "state": "enabled", 00:21:57.182 "thread": "nvmf_tgt_poll_group_000", 00:21:57.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:57.182 "listen_address": { 00:21:57.182 "trtype": "TCP", 00:21:57.182 "adrfam": "IPv4", 00:21:57.182 "traddr": "10.0.0.2", 00:21:57.182 "trsvcid": "4420" 00:21:57.182 }, 00:21:57.182 "peer_address": { 00:21:57.182 "trtype": "TCP", 00:21:57.182 "adrfam": "IPv4", 00:21:57.182 "traddr": "10.0.0.1", 00:21:57.182 "trsvcid": "54786" 00:21:57.182 }, 00:21:57.182 "auth": { 00:21:57.182 "state": "completed", 00:21:57.182 "digest": "sha512", 00:21:57.182 "dhgroup": "ffdhe3072" 00:21:57.182 } 00:21:57.182 } 00:21:57.182 ]' 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.182 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.441 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:57.441 15:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:21:58.009 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.009 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:58.009 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.009 15:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.009 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.009 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.010 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.010 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.269 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:58.269 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.269 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.269 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:58.269 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:58.269 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.269 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.269 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.269 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.269 15:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.269 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.269 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.269 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.527 00:21:58.527 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.527 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.527 15:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.786 { 00:21:58.786 "cntlid": 117, 00:21:58.786 "qid": 0, 00:21:58.786 "state": "enabled", 00:21:58.786 "thread": "nvmf_tgt_poll_group_000", 00:21:58.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:58.786 "listen_address": { 00:21:58.786 "trtype": "TCP", 00:21:58.786 "adrfam": "IPv4", 00:21:58.786 "traddr": "10.0.0.2", 00:21:58.786 "trsvcid": "4420" 00:21:58.786 }, 00:21:58.786 "peer_address": { 00:21:58.786 "trtype": "TCP", 00:21:58.786 "adrfam": "IPv4", 00:21:58.786 "traddr": "10.0.0.1", 00:21:58.786 "trsvcid": "54796" 00:21:58.786 }, 00:21:58.786 "auth": { 00:21:58.786 "state": "completed", 00:21:58.786 "digest": "sha512", 00:21:58.786 "dhgroup": "ffdhe3072" 00:21:58.786 } 00:21:58.786 } 00:21:58.786 ]' 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.786 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:21:59.046 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:59.046 15:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:21:59.614 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.614 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:59.614 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.614 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.614 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.614 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.614 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:59.614 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.873 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.132 00:22:00.132 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.132 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.132 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.132 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.132 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.132 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.132 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.132 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.132 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.132 { 00:22:00.132 "cntlid": 119, 00:22:00.132 "qid": 0, 00:22:00.132 "state": "enabled", 00:22:00.132 "thread": "nvmf_tgt_poll_group_000", 00:22:00.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:00.132 "listen_address": { 00:22:00.132 "trtype": "TCP", 00:22:00.132 "adrfam": "IPv4", 00:22:00.132 "traddr": "10.0.0.2", 00:22:00.132 "trsvcid": "4420" 00:22:00.132 }, 00:22:00.132 "peer_address": { 00:22:00.132 "trtype": "TCP", 00:22:00.132 "adrfam": "IPv4", 00:22:00.132 "traddr": "10.0.0.1", 00:22:00.132 "trsvcid": "54816" 00:22:00.132 }, 00:22:00.132 "auth": { 00:22:00.132 
"state": "completed", 00:22:00.132 "digest": "sha512", 00:22:00.132 "dhgroup": "ffdhe3072" 00:22:00.132 } 00:22:00.132 } 00:22:00.132 ]' 00:22:00.132 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.391 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.391 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.391 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:00.391 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.391 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.391 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.391 15:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.650 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:22:00.650 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.218 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.218 15:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.477 00:22:01.477 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.477 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.477 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.736 
15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.736 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.736 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.736 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.736 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.736 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.736 { 00:22:01.736 "cntlid": 121, 00:22:01.736 "qid": 0, 00:22:01.736 "state": "enabled", 00:22:01.736 "thread": "nvmf_tgt_poll_group_000", 00:22:01.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:01.736 "listen_address": { 00:22:01.736 "trtype": "TCP", 00:22:01.736 "adrfam": "IPv4", 00:22:01.736 "traddr": "10.0.0.2", 00:22:01.736 "trsvcid": "4420" 00:22:01.736 }, 00:22:01.736 "peer_address": { 00:22:01.736 "trtype": "TCP", 00:22:01.736 "adrfam": "IPv4", 00:22:01.736 "traddr": "10.0.0.1", 00:22:01.736 "trsvcid": "54860" 00:22:01.736 }, 00:22:01.736 "auth": { 00:22:01.736 "state": "completed", 00:22:01.736 "digest": "sha512", 00:22:01.736 "dhgroup": "ffdhe4096" 00:22:01.736 } 00:22:01.736 } 00:22:01.736 ]' 00:22:01.736 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.736 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.736 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.995 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.995 15:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.995 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.995 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.995 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.254 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:22:02.254 15:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.822 15:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.822 15:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.822 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.082 00:22:03.082 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.082 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.082 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.341 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.341 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.341 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.341 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.341 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.341 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.341 { 00:22:03.341 "cntlid": 123, 00:22:03.341 "qid": 0, 00:22:03.341 "state": "enabled", 00:22:03.341 "thread": "nvmf_tgt_poll_group_000", 00:22:03.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:03.341 "listen_address": { 00:22:03.341 "trtype": "TCP", 00:22:03.341 "adrfam": "IPv4", 00:22:03.341 "traddr": "10.0.0.2", 00:22:03.341 "trsvcid": "4420" 00:22:03.341 }, 00:22:03.341 "peer_address": { 00:22:03.341 "trtype": "TCP", 00:22:03.341 "adrfam": "IPv4", 00:22:03.341 "traddr": "10.0.0.1", 00:22:03.341 "trsvcid": "55892" 00:22:03.341 }, 00:22:03.341 "auth": { 00:22:03.341 "state": "completed", 00:22:03.341 "digest": "sha512", 00:22:03.341 "dhgroup": "ffdhe4096" 00:22:03.341 } 00:22:03.341 } 00:22:03.341 ]' 00:22:03.341 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.341 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.341 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.341 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.341 15:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.600 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.600 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.600 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:22:03.600 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:22:03.600 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:22:04.169 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.169 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:04.169 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.169 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.169 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.169 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.169 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:04.169 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.428 15:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.687 00:22:04.687 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.687 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.687 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.946 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.946 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.946 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.946 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.946 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.946 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.946 { 00:22:04.946 "cntlid": 125, 00:22:04.946 "qid": 0, 00:22:04.946 "state": "enabled", 00:22:04.946 "thread": "nvmf_tgt_poll_group_000", 00:22:04.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:04.946 "listen_address": { 00:22:04.946 "trtype": "TCP", 00:22:04.946 "adrfam": "IPv4", 00:22:04.946 "traddr": "10.0.0.2", 00:22:04.946 "trsvcid": "4420" 00:22:04.946 }, 00:22:04.946 "peer_address": { 00:22:04.946 "trtype": "TCP", 00:22:04.946 "adrfam": "IPv4", 
00:22:04.946 "traddr": "10.0.0.1", 00:22:04.946 "trsvcid": "55926" 00:22:04.946 }, 00:22:04.946 "auth": { 00:22:04.946 "state": "completed", 00:22:04.946 "digest": "sha512", 00:22:04.946 "dhgroup": "ffdhe4096" 00:22:04.946 } 00:22:04.946 } 00:22:04.946 ]' 00:22:04.946 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.946 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.946 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.946 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:04.946 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.206 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.206 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.206 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.206 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:22:05.206 15:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:22:05.775 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.775 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:05.775 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.775 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.775 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.775 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.775 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.775 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:06.033 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:06.033 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.033 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.033 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:06.034 15:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:06.034 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.034 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:22:06.034 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.034 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.034 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.034 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:06.034 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.034 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.292 00:22:06.292 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.292 15:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.292 15:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.551 { 00:22:06.551 "cntlid": 127, 00:22:06.551 "qid": 0, 00:22:06.551 "state": "enabled", 00:22:06.551 "thread": "nvmf_tgt_poll_group_000", 00:22:06.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:06.551 "listen_address": { 00:22:06.551 "trtype": "TCP", 00:22:06.551 "adrfam": "IPv4", 00:22:06.551 "traddr": "10.0.0.2", 00:22:06.551 "trsvcid": "4420" 00:22:06.551 }, 00:22:06.551 "peer_address": { 00:22:06.551 "trtype": "TCP", 00:22:06.551 "adrfam": "IPv4", 00:22:06.551 "traddr": "10.0.0.1", 00:22:06.551 "trsvcid": "55960" 00:22:06.551 }, 00:22:06.551 "auth": { 00:22:06.551 "state": "completed", 00:22:06.551 "digest": "sha512", 00:22:06.551 "dhgroup": "ffdhe4096" 00:22:06.551 } 00:22:06.551 } 00:22:06.551 ]' 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.551 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.809 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:22:06.809 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:22:07.376 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.376 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:07.376 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.376 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.376 15:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.376 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.376 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.376 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.376 15:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.633 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:07.633 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.633 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.633 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:07.633 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:07.633 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.633 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.633 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.633 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.633 
15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.633 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.633 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.633 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.892 00:22:08.151 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.151 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.151 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.151 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.151 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.151 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.151 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.151 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.151 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.151 { 00:22:08.151 "cntlid": 129, 00:22:08.151 "qid": 0, 00:22:08.151 "state": "enabled", 00:22:08.151 "thread": "nvmf_tgt_poll_group_000", 00:22:08.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:08.151 "listen_address": { 00:22:08.151 "trtype": "TCP", 00:22:08.151 "adrfam": "IPv4", 00:22:08.151 "traddr": "10.0.0.2", 00:22:08.151 "trsvcid": "4420" 00:22:08.151 }, 00:22:08.151 "peer_address": { 00:22:08.151 "trtype": "TCP", 00:22:08.151 "adrfam": "IPv4", 00:22:08.151 "traddr": "10.0.0.1", 00:22:08.151 "trsvcid": "55998" 00:22:08.151 }, 00:22:08.151 "auth": { 00:22:08.151 "state": "completed", 00:22:08.151 "digest": "sha512", 00:22:08.151 "dhgroup": "ffdhe6144" 00:22:08.151 } 00:22:08.151 } 00:22:08.151 ]' 00:22:08.151 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.410 15:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:22:08.669 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:22:08.669 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.236 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.495 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.495 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.495 15:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.495 15:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.753 00:22:09.753 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.754 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.754 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.013 { 00:22:10.013 "cntlid": 131, 00:22:10.013 "qid": 0, 00:22:10.013 "state": "enabled", 00:22:10.013 "thread": "nvmf_tgt_poll_group_000", 00:22:10.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:10.013 "listen_address": { 00:22:10.013 "trtype": "TCP", 00:22:10.013 "adrfam": "IPv4", 00:22:10.013 "traddr": "10.0.0.2", 00:22:10.013 "trsvcid": "4420" 00:22:10.013 }, 00:22:10.013 "peer_address": { 
00:22:10.013 "trtype": "TCP", 00:22:10.013 "adrfam": "IPv4", 00:22:10.013 "traddr": "10.0.0.1", 00:22:10.013 "trsvcid": "56012" 00:22:10.013 }, 00:22:10.013 "auth": { 00:22:10.013 "state": "completed", 00:22:10.013 "digest": "sha512", 00:22:10.013 "dhgroup": "ffdhe6144" 00:22:10.013 } 00:22:10.013 } 00:22:10.013 ]' 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.013 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.272 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:22:10.272 15:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:22:10.839 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.839 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:10.839 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.839 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.839 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.839 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.839 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:10.840 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.099 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:11.099 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.099 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:11.099 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:11.099 15:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:11.099 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.099 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.099 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.099 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.099 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.099 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.099 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.099 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.358 00:22:11.358 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.358 15:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.358 15:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.617 { 00:22:11.617 "cntlid": 133, 00:22:11.617 "qid": 0, 00:22:11.617 "state": "enabled", 00:22:11.617 "thread": "nvmf_tgt_poll_group_000", 00:22:11.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:11.617 "listen_address": { 00:22:11.617 "trtype": "TCP", 00:22:11.617 "adrfam": "IPv4", 00:22:11.617 "traddr": "10.0.0.2", 00:22:11.617 "trsvcid": "4420" 00:22:11.617 }, 00:22:11.617 "peer_address": { 00:22:11.617 "trtype": "TCP", 00:22:11.617 "adrfam": "IPv4", 00:22:11.617 "traddr": "10.0.0.1", 00:22:11.617 "trsvcid": "56048" 00:22:11.617 }, 00:22:11.617 "auth": { 00:22:11.617 "state": "completed", 00:22:11.617 "digest": "sha512", 00:22:11.617 "dhgroup": "ffdhe6144" 00:22:11.617 } 00:22:11.617 } 00:22:11.617 ]' 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.617 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.876 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:22:11.876 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:22:12.445 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.445 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:12.445 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.445 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.445 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.445 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.445 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.445 15:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.704 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.963 00:22:12.963 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.963 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.963 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.222 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.222 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.222 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.222 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.222 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.222 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.222 { 00:22:13.222 "cntlid": 135, 00:22:13.222 "qid": 0, 00:22:13.222 "state": "enabled", 00:22:13.222 "thread": "nvmf_tgt_poll_group_000", 00:22:13.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:13.222 "listen_address": { 00:22:13.222 "trtype": "TCP", 00:22:13.222 "adrfam": "IPv4", 00:22:13.222 "traddr": "10.0.0.2", 00:22:13.222 "trsvcid": "4420" 00:22:13.222 }, 00:22:13.222 "peer_address": { 00:22:13.222 "trtype": "TCP", 00:22:13.222 "adrfam": "IPv4", 00:22:13.222 "traddr": "10.0.0.1", 00:22:13.222 "trsvcid": "55656" 00:22:13.222 }, 00:22:13.222 "auth": { 00:22:13.222 "state": "completed", 00:22:13.222 "digest": "sha512", 00:22:13.222 "dhgroup": "ffdhe6144" 00:22:13.222 } 00:22:13.222 } 00:22:13.222 ]' 00:22:13.222 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.222 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.222 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.222 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.222 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.482 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.482 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.482 15:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
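The xtrace lines above repeatedly show the expansion `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` from target/auth.sh: a bash idiom that yields the `--dhchap-ctrlr-key` argument pair only when a controller key exists for that index, and expands to nothing otherwise (as in the key3 iteration below, where `nvmf_subsystem_add_host` is called without a ctrlr key). A minimal standalone sketch of the idiom, using a hypothetical `ckeys` table rather than the script's real key material:

```shell
#!/usr/bin/env bash
# Sketch of the conditional-argument idiom seen in target/auth.sh.
# "ckeys" here is a hypothetical table; index 1 is deliberately empty.
ckeys=("secret0" "" "secret2")

build_ckey_args() {
  local idx=$1
  # ${var:+word} expands to "word" only if var is set and non-empty,
  # so the array is either the two-element option pair or empty.
  local args=(${ckeys[$idx]:+--dhchap-ctrlr-key "ckey$idx"})
  echo "${args[@]}"
}

build_ckey_args 0   # prints: --dhchap-ctrlr-key ckey0
build_ckey_args 1   # prints an empty line (no key configured)
build_ckey_args 2   # prints: --dhchap-ctrlr-key ckey2
```

This is why some `bdev_nvme_attach_controller` invocations in the log carry both `--dhchap-key keyN --dhchap-ctrlr-key ckeyN` while others pass only `--dhchap-key keyN`.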
00:22:13.482 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:22:13.482 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:22:14.050 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.050 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:14.050 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.050 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.050 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.050 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.050 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.050 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.050 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.309 15:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.876 00:22:14.876 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.876 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.876 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.135 { 00:22:15.135 "cntlid": 137, 00:22:15.135 "qid": 0, 00:22:15.135 "state": "enabled", 00:22:15.135 "thread": "nvmf_tgt_poll_group_000", 00:22:15.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:15.135 "listen_address": { 00:22:15.135 "trtype": "TCP", 00:22:15.135 "adrfam": "IPv4", 00:22:15.135 "traddr": "10.0.0.2", 00:22:15.135 "trsvcid": "4420" 00:22:15.135 }, 00:22:15.135 "peer_address": { 00:22:15.135 "trtype": "TCP", 00:22:15.135 "adrfam": "IPv4", 
00:22:15.135 "traddr": "10.0.0.1", 00:22:15.135 "trsvcid": "55668" 00:22:15.135 }, 00:22:15.135 "auth": { 00:22:15.135 "state": "completed", 00:22:15.135 "digest": "sha512", 00:22:15.135 "dhgroup": "ffdhe8192" 00:22:15.135 } 00:22:15.135 } 00:22:15.135 ]' 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.135 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.395 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:22:15.395 15:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:22:15.964 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.964 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:15.964 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.964 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.964 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.964 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.964 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:15.964 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.223 15:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.483 00:22:16.742 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.742 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.742 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.742 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.742 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.742 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.742 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.742 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.742 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.742 { 00:22:16.742 "cntlid": 139, 00:22:16.742 "qid": 0, 00:22:16.742 "state": "enabled", 00:22:16.742 "thread": "nvmf_tgt_poll_group_000", 00:22:16.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:16.742 "listen_address": { 00:22:16.742 "trtype": "TCP", 00:22:16.742 "adrfam": "IPv4", 00:22:16.742 "traddr": "10.0.0.2", 00:22:16.742 "trsvcid": "4420" 00:22:16.742 }, 00:22:16.742 "peer_address": { 00:22:16.742 "trtype": "TCP", 00:22:16.742 "adrfam": "IPv4", 00:22:16.742 "traddr": "10.0.0.1", 00:22:16.742 "trsvcid": "55688" 00:22:16.742 }, 00:22:16.742 "auth": { 00:22:16.742 "state": "completed", 00:22:16.742 "digest": "sha512", 00:22:16.742 "dhgroup": "ffdhe8192" 00:22:16.742 } 00:22:16.742 } 00:22:16.742 ]' 00:22:16.742 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.742 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.742 15:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.002 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.002 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.002 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.002 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.002 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.261 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:22:17.261 15:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: --dhchap-ctrl-secret DHHC-1:02:MjcxY2YyZTI1OWNjZGM3NDZmYjI2NDg4ZjE5ZGVjNDBlYjMyYWIwNGQyMDcxYWEw0TOoBg==: 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:17.830 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.831 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.831 15:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.831 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.831 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.831 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.831 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.831 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.399 00:22:18.399 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.399 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.399 15:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.658 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.658 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.658 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.658 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.658 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.658 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.658 { 00:22:18.658 "cntlid": 141, 00:22:18.658 "qid": 0, 00:22:18.658 "state": "enabled", 00:22:18.658 "thread": "nvmf_tgt_poll_group_000", 00:22:18.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:18.658 "listen_address": { 00:22:18.658 "trtype": "TCP", 00:22:18.658 "adrfam": "IPv4", 00:22:18.658 "traddr": "10.0.0.2", 00:22:18.658 "trsvcid": "4420" 00:22:18.658 }, 00:22:18.658 "peer_address": { 00:22:18.658 "trtype": "TCP", 00:22:18.658 "adrfam": "IPv4", 00:22:18.658 "traddr": "10.0.0.1", 00:22:18.658 "trsvcid": "55722" 00:22:18.658 }, 00:22:18.658 "auth": { 00:22:18.658 "state": "completed", 00:22:18.658 "digest": "sha512", 00:22:18.658 "dhgroup": "ffdhe8192" 00:22:18.658 } 00:22:18.658 } 00:22:18.658 ]' 00:22:18.658 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.659 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.659 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.659 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.659 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.659 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.659 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:22:18.659 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.918 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:22:18.918 15:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:01:ZDM5OGJhZjhiNGUxOWVhOGVhZjY1ZjlkYjU1NjEwN2NErqUW: 00:22:19.486 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.486 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:19.486 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.486 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.486 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.486 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.486 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.486 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.746 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:20.315 00:22:20.315 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.315 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.315 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.315 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.315 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.315 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.315 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.574 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.574 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.574 { 00:22:20.574 "cntlid": 143, 00:22:20.574 "qid": 0, 00:22:20.574 "state": "enabled", 00:22:20.574 "thread": "nvmf_tgt_poll_group_000", 00:22:20.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:20.574 "listen_address": { 00:22:20.574 "trtype": "TCP", 00:22:20.574 "adrfam": "IPv4", 00:22:20.574 "traddr": "10.0.0.2", 00:22:20.574 "trsvcid": 
"4420" 00:22:20.574 }, 00:22:20.574 "peer_address": { 00:22:20.574 "trtype": "TCP", 00:22:20.574 "adrfam": "IPv4", 00:22:20.574 "traddr": "10.0.0.1", 00:22:20.574 "trsvcid": "55750" 00:22:20.574 }, 00:22:20.574 "auth": { 00:22:20.574 "state": "completed", 00:22:20.574 "digest": "sha512", 00:22:20.574 "dhgroup": "ffdhe8192" 00:22:20.574 } 00:22:20.574 } 00:22:20.574 ]' 00:22:20.574 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.574 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.574 15:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.574 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:20.574 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.574 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.574 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.574 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.833 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:22:20.833 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:22:21.404 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.404 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:21.404 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.404 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.404 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.404 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:21.404 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:21.404 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:21.404 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.404 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.404 15:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 
-- # connect_authenticate sha512 ffdhe8192 0 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.663 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.922 00:22:21.922 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.922 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.922 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.180 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.180 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.180 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.180 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.180 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.180 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.180 { 00:22:22.180 "cntlid": 145, 00:22:22.180 "qid": 0, 00:22:22.180 "state": "enabled", 00:22:22.180 "thread": "nvmf_tgt_poll_group_000", 00:22:22.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:22.180 "listen_address": { 00:22:22.180 "trtype": "TCP", 00:22:22.180 "adrfam": "IPv4", 00:22:22.180 "traddr": "10.0.0.2", 00:22:22.180 "trsvcid": "4420" 00:22:22.180 }, 00:22:22.180 "peer_address": { 00:22:22.180 "trtype": "TCP", 00:22:22.180 "adrfam": "IPv4", 00:22:22.180 "traddr": "10.0.0.1", 00:22:22.180 "trsvcid": "55784" 00:22:22.180 }, 00:22:22.180 "auth": { 00:22:22.180 "state": "completed", 00:22:22.180 "digest": 
"sha512", 00:22:22.180 "dhgroup": "ffdhe8192" 00:22:22.180 } 00:22:22.180 } 00:22:22.180 ]' 00:22:22.180 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.180 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.180 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.180 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:22.439 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.439 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.439 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.439 15:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.439 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:22:22.439 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MzUyNWZkYzQ5OWU0MTJjNzk3ODM2MjMyY2RlZjJkOTRhY2IzNzZhZDdhZWYyMGVh9E2Vxw==: --dhchap-ctrl-secret 
DHHC-1:03:YWVmMzhlMzZhNDk1Yjk1M2FhYzk4ZjhlOGQ0ZjVhMjYyMTI4NjllYTA2MGQ4Y2Y2ZTI4NzJiZDkwMDEzN2MwOY1NnM0=: 00:22:23.008 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.008 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:23.008 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.008 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.008 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.008 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:22:23.008 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.008 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.291 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.291 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:23.291 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:23.291 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:23.291 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:22:23.291 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.291 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:23.291 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.291 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:23.291 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:23.291 15:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:23.635 request: 00:22:23.635 { 00:22:23.635 "name": "nvme0", 00:22:23.635 "trtype": "tcp", 00:22:23.635 "traddr": "10.0.0.2", 00:22:23.635 "adrfam": "ipv4", 00:22:23.635 "trsvcid": "4420", 00:22:23.635 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:23.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:23.635 "prchk_reftag": false, 00:22:23.635 "prchk_guard": false, 00:22:23.635 "hdgst": false, 00:22:23.635 "ddgst": false, 00:22:23.635 "dhchap_key": "key2", 00:22:23.635 "allow_unrecognized_csi": false, 00:22:23.635 "method": "bdev_nvme_attach_controller", 00:22:23.635 "req_id": 1 00:22:23.635 } 00:22:23.635 Got JSON-RPC error response 00:22:23.635 response: 00:22:23.635 { 00:22:23.635 "code": -5, 00:22:23.635 "message": 
"Input/output error" 00:22:23.635 } 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:23.635 15:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.635 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:23.918 request: 00:22:23.918 { 00:22:23.918 "name": "nvme0", 00:22:23.918 "trtype": "tcp", 00:22:23.918 "traddr": "10.0.0.2", 00:22:23.918 "adrfam": "ipv4", 00:22:23.918 "trsvcid": "4420", 00:22:23.918 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:23.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:23.918 "prchk_reftag": false, 00:22:23.918 "prchk_guard": false, 00:22:23.918 "hdgst": 
false, 00:22:23.918 "ddgst": false, 00:22:23.918 "dhchap_key": "key1", 00:22:23.918 "dhchap_ctrlr_key": "ckey2", 00:22:23.918 "allow_unrecognized_csi": false, 00:22:23.918 "method": "bdev_nvme_attach_controller", 00:22:23.918 "req_id": 1 00:22:23.918 } 00:22:23.918 Got JSON-RPC error response 00:22:23.918 response: 00:22:23.918 { 00:22:23.918 "code": -5, 00:22:23.918 "message": "Input/output error" 00:22:23.918 } 00:22:24.256 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:24.256 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:24.256 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.257 15:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.515 request: 00:22:24.515 { 00:22:24.515 "name": "nvme0", 00:22:24.515 "trtype": 
"tcp", 00:22:24.515 "traddr": "10.0.0.2", 00:22:24.515 "adrfam": "ipv4", 00:22:24.515 "trsvcid": "4420", 00:22:24.515 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:24.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:24.515 "prchk_reftag": false, 00:22:24.515 "prchk_guard": false, 00:22:24.515 "hdgst": false, 00:22:24.515 "ddgst": false, 00:22:24.515 "dhchap_key": "key1", 00:22:24.515 "dhchap_ctrlr_key": "ckey1", 00:22:24.515 "allow_unrecognized_csi": false, 00:22:24.515 "method": "bdev_nvme_attach_controller", 00:22:24.515 "req_id": 1 00:22:24.515 } 00:22:24.515 Got JSON-RPC error response 00:22:24.515 response: 00:22:24.515 { 00:22:24.515 "code": -5, 00:22:24.515 "message": "Input/output error" 00:22:24.516 } 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3858403 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # '[' -z 3858403 ']' 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3858403 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3858403 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3858403' 00:22:24.516 killing process with pid 3858403 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3858403 00:22:24.516 15:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3858403 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3880049 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3880049 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3880049 ']' 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:25.894 15:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3880049 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3880049 ']' 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.831 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.089 null0 00:22:27.089 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.089 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:27.089 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hBQ 00:22:27.089 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.089 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.29r ]] 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.29r 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.87V 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.OEq ]] 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OEq 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DXC 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.NTB ]] 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NTB 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.M11 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.349 15:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.917 nvme0n1 00:22:27.917 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.917 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.917 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.176 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.176 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.176 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.176 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.176 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.176 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.176 { 00:22:28.176 "cntlid": 1, 00:22:28.176 "qid": 0, 00:22:28.176 "state": "enabled", 00:22:28.176 "thread": "nvmf_tgt_poll_group_000", 00:22:28.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:28.176 "listen_address": { 00:22:28.176 "trtype": "TCP", 00:22:28.176 "adrfam": "IPv4", 00:22:28.176 "traddr": "10.0.0.2", 00:22:28.176 "trsvcid": "4420" 00:22:28.176 }, 00:22:28.176 "peer_address": { 00:22:28.176 "trtype": "TCP", 00:22:28.176 "adrfam": "IPv4", 00:22:28.176 "traddr": 
"10.0.0.1", 00:22:28.176 "trsvcid": "36444" 00:22:28.176 }, 00:22:28.176 "auth": { 00:22:28.176 "state": "completed", 00:22:28.176 "digest": "sha512", 00:22:28.176 "dhgroup": "ffdhe8192" 00:22:28.176 } 00:22:28.176 } 00:22:28.176 ]' 00:22:28.176 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.176 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.176 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.434 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:28.434 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.434 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.434 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.434 15:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.694 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:22:28.694 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:22:29.263 15:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:29.263 15:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.263 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:29.522 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.522 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:29.522 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:29.522 15:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:29.522 request: 00:22:29.522 { 00:22:29.522 "name": "nvme0", 00:22:29.522 "trtype": "tcp", 00:22:29.522 "traddr": "10.0.0.2", 00:22:29.522 "adrfam": "ipv4", 00:22:29.522 "trsvcid": "4420", 00:22:29.522 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:29.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:29.522 "prchk_reftag": false, 00:22:29.522 "prchk_guard": false, 00:22:29.522 "hdgst": false, 00:22:29.522 "ddgst": false, 00:22:29.522 "dhchap_key": "key3", 00:22:29.522 
"allow_unrecognized_csi": false, 00:22:29.522 "method": "bdev_nvme_attach_controller", 00:22:29.522 "req_id": 1 00:22:29.522 } 00:22:29.522 Got JSON-RPC error response 00:22:29.522 response: 00:22:29.522 { 00:22:29.522 "code": -5, 00:22:29.522 "message": "Input/output error" 00:22:29.522 } 00:22:29.522 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:29.522 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:29.522 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:29.522 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:29.522 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:29.522 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:29.522 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:29.522 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:29.781 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:29.781 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:29.781 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:29.781 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:29.781 15:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.781 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:29.781 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:29.781 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:29.781 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:29.781 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:30.040 request: 00:22:30.040 { 00:22:30.040 "name": "nvme0", 00:22:30.040 "trtype": "tcp", 00:22:30.040 "traddr": "10.0.0.2", 00:22:30.040 "adrfam": "ipv4", 00:22:30.040 "trsvcid": "4420", 00:22:30.040 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:30.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:30.040 "prchk_reftag": false, 00:22:30.040 "prchk_guard": false, 00:22:30.040 "hdgst": false, 00:22:30.040 "ddgst": false, 00:22:30.040 "dhchap_key": "key3", 00:22:30.040 "allow_unrecognized_csi": false, 00:22:30.040 "method": "bdev_nvme_attach_controller", 00:22:30.040 "req_id": 1 00:22:30.040 } 00:22:30.040 Got JSON-RPC error response 00:22:30.040 response: 00:22:30.040 { 00:22:30.040 "code": -5, 00:22:30.040 "message": "Input/output error" 00:22:30.040 } 00:22:30.040 
15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:30.040 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.040 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.040 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.040 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:30.040 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:30.040 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:30.040 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:30.040 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:30.040 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.299 15:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.559 request: 00:22:30.559 { 00:22:30.559 "name": "nvme0", 00:22:30.559 "trtype": "tcp", 00:22:30.559 "traddr": "10.0.0.2", 00:22:30.559 "adrfam": "ipv4", 00:22:30.559 "trsvcid": "4420", 00:22:30.559 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:30.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:30.559 "prchk_reftag": false, 00:22:30.559 "prchk_guard": false, 00:22:30.559 "hdgst": false, 00:22:30.559 "ddgst": false, 00:22:30.559 "dhchap_key": "key0", 00:22:30.559 "dhchap_ctrlr_key": "key1", 00:22:30.559 "allow_unrecognized_csi": false, 00:22:30.559 "method": "bdev_nvme_attach_controller", 00:22:30.559 "req_id": 1 00:22:30.559 } 00:22:30.559 Got JSON-RPC error response 00:22:30.559 response: 00:22:30.559 { 00:22:30.559 "code": -5, 00:22:30.559 "message": "Input/output error" 00:22:30.559 } 00:22:30.559 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:30.559 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.559 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.559 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.559 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:30.559 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:30.559 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:30.818 nvme0n1 00:22:30.818 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:30.818 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.818 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:31.077 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.077 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.077 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.077 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:22:31.077 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.077 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:31.336 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.336 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:31.336 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:31.336 15:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:31.904 nvme0n1 00:22:31.904 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:31.904 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:31.904 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.167 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.167 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:32.167 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.167 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.167 
15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.167 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:32.167 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:32.167 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.427 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.427 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:22:32.427 15:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: --dhchap-ctrl-secret DHHC-1:03:N2E0NjE5NmYzZmUzNDU0ZTY0YjA4MWJiZTk0YzM5ZTNkYTZhYTkyZmRhNTNmM2MxYTRhNTg3MDQ3MDUzMzMxYfm8+tk=: 00:22:32.994 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:32.994 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:32.994 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:32.994 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:32.994 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:32.994 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:32.994 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:32.994 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.994 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.254 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:33.254 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:33.254 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:33.254 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:33.254 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:33.254 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:33.254 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:33.254 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:33.254 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:33.254 15:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:33.513 request: 00:22:33.513 { 00:22:33.513 "name": "nvme0", 00:22:33.513 "trtype": "tcp", 00:22:33.513 "traddr": "10.0.0.2", 00:22:33.513 "adrfam": "ipv4", 00:22:33.513 "trsvcid": "4420", 00:22:33.513 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:33.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:22:33.513 "prchk_reftag": false, 00:22:33.513 "prchk_guard": false, 00:22:33.513 "hdgst": false, 00:22:33.513 "ddgst": false, 00:22:33.513 "dhchap_key": "key1", 00:22:33.513 "allow_unrecognized_csi": false, 00:22:33.513 "method": "bdev_nvme_attach_controller", 00:22:33.513 "req_id": 1 00:22:33.513 } 00:22:33.513 Got JSON-RPC error response 00:22:33.513 response: 00:22:33.513 { 00:22:33.513 "code": -5, 00:22:33.513 "message": "Input/output error" 00:22:33.513 } 00:22:33.513 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:33.513 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:33.513 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:33.513 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:33.513 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:33.513 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:33.513 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:34.450 nvme0n1 00:22:34.450 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:34.450 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:34.450 15:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.450 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.450 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.450 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.709 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:34.709 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.709 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:34.709 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.709 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:34.709 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:34.709 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:34.968 nvme0n1 00:22:34.968 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:34.968 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:34.968 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.226 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.226 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.226 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: '' 2s 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: ]] 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmUxMGZmNWY4MGU5OTUwODc2Nzk4OTlmMDEzZjRkYzgrdlFf: 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:35.485 15:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:37.390 
15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: 2s 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:37.390 15:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: ]] 00:22:37.390 15:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTRlMTBiMDZlZWVmMjlkMDkxMGM0ZTgzNWRkOWM0NWYyOWNiNzMxZjZhZTI2N2M4vjrzBw==: 00:22:37.390 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:37.390 15:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:39.926 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:40.185 nvme0n1 00:22:40.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:40.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:40.444 15:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:40.702 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:40.702 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:40.702 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.961 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.961 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:40.961 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.961 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.961 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.961 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:40.961 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:41.220 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:41.220 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:41.220 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:41.479 15:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:41.737 request: 00:22:41.737 { 00:22:41.737 "name": "nvme0", 00:22:41.737 "dhchap_key": "key1", 00:22:41.737 "dhchap_ctrlr_key": "key3", 00:22:41.737 "method": "bdev_nvme_set_keys", 00:22:41.737 "req_id": 1 00:22:41.737 } 00:22:41.737 Got JSON-RPC error response 00:22:41.737 response: 00:22:41.737 { 00:22:41.737 "code": -13, 00:22:41.737 "message": "Permission denied" 00:22:41.737 } 00:22:41.737 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:41.737 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:41.737 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:41.737 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:41.737 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:41.737 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:41.737 15:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.995 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:41.996 15:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:42.931 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:42.932 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:42.932 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.191 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:43.191 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:43.191 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.191 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.191 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.191 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:43.191 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:43.191 15:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:44.128 nvme0n1 00:22:44.128 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:44.128 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.128 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.128 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.128 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:44.128 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:44.128 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:44.128 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:44.128 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.128 15:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:44.128 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.128 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:44.128 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:44.387 request: 00:22:44.387 { 00:22:44.387 "name": "nvme0", 00:22:44.387 "dhchap_key": "key2", 00:22:44.387 "dhchap_ctrlr_key": "key0", 00:22:44.387 "method": "bdev_nvme_set_keys", 00:22:44.387 "req_id": 1 00:22:44.387 } 00:22:44.387 Got JSON-RPC error response 00:22:44.387 response: 00:22:44.387 { 00:22:44.387 "code": -13, 00:22:44.387 "message": "Permission denied" 00:22:44.387 } 00:22:44.387 15:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:44.387 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:44.387 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:44.387 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:44.387 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:44.387 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:44.387 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.646 15:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:44.646 15:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:45.583 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:45.583 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:45.584 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3858641 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3858641 ']' 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3858641 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3858641 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@970 -- # echo 'killing process with pid 3858641' 00:22:45.842 killing process with pid 3858641 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3858641 00:22:45.842 15:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3858641 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.377 rmmod nvme_tcp 00:22:48.377 rmmod nvme_fabrics 00:22:48.377 rmmod nvme_keyring 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3880049 ']' 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3880049 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3880049 ']' 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3880049 
00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3880049 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3880049' 00:22:48.377 killing process with pid 3880049 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3880049 00:22:48.377 15:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3880049 00:22:49.314 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:49.314 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:49.314 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:49.314 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:49.314 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:49.314 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:49.314 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:49.573 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:49.573 15:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:49.573 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.573 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.573 15:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.480 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:51.480 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.hBQ /tmp/spdk.key-sha256.87V /tmp/spdk.key-sha384.DXC /tmp/spdk.key-sha512.M11 /tmp/spdk.key-sha512.29r /tmp/spdk.key-sha384.OEq /tmp/spdk.key-sha256.NTB '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:51.480 00:22:51.480 real 2m37.255s 00:22:51.480 user 5m58.762s 00:22:51.480 sys 0m24.147s 00:22:51.480 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:51.480 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.480 ************************************ 00:22:51.480 END TEST nvmf_auth_target 00:22:51.480 ************************************ 00:22:51.480 15:27:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:51.480 15:27:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:51.480 15:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:22:51.480 15:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- 
# xtrace_disable 00:22:51.480 15:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:51.480 ************************************ 00:22:51.480 START TEST nvmf_bdevio_no_huge 00:22:51.480 ************************************ 00:22:51.480 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:51.740 * Looking for test storage... 00:22:51.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:51.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.740 --rc genhtml_branch_coverage=1 00:22:51.740 --rc genhtml_function_coverage=1 00:22:51.740 --rc genhtml_legend=1 00:22:51.740 --rc geninfo_all_blocks=1 00:22:51.740 --rc geninfo_unexecuted_blocks=1 00:22:51.740 00:22:51.740 ' 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:51.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.740 --rc genhtml_branch_coverage=1 00:22:51.740 --rc genhtml_function_coverage=1 00:22:51.740 --rc genhtml_legend=1 00:22:51.740 --rc geninfo_all_blocks=1 00:22:51.740 --rc geninfo_unexecuted_blocks=1 00:22:51.740 00:22:51.740 ' 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:51.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.740 --rc genhtml_branch_coverage=1 00:22:51.740 --rc genhtml_function_coverage=1 00:22:51.740 --rc genhtml_legend=1 00:22:51.740 --rc geninfo_all_blocks=1 00:22:51.740 --rc geninfo_unexecuted_blocks=1 00:22:51.740 00:22:51.740 ' 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:51.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.740 --rc genhtml_branch_coverage=1 
00:22:51.740 --rc genhtml_function_coverage=1 00:22:51.740 --rc genhtml_legend=1 00:22:51.740 --rc geninfo_all_blocks=1 00:22:51.740 --rc geninfo_unexecuted_blocks=1 00:22:51.740 00:22:51.740 ' 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.740 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:51.741 15:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.741 15:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:22:58.312 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:58.312 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:58.312 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:58.313 Found net devices under 0000:86:00.0: cvl_0_0 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.313 
15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:58.313 Found net devices under 0000:86:00.1: cvl_0_1 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.313 15:27:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:58.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:22:58.313 00:22:58.313 --- 10.0.0.2 ping statistics --- 00:22:58.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.313 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:58.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:22:58.313 00:22:58.313 --- 10.0.0.1 ping statistics --- 00:22:58.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.313 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3887958 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3887958 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 3887958 ']' 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:58.313 15:27:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.313 [2024-11-06 15:27:25.324772] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:22:58.313 [2024-11-06 15:27:25.324865] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:58.313 [2024-11-06 15:27:25.471416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:58.313 [2024-11-06 15:27:25.593957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.313 [2024-11-06 15:27:25.594002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.313 [2024-11-06 15:27:25.594012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.313 [2024-11-06 15:27:25.594023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.313 [2024-11-06 15:27:25.594032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:58.313 [2024-11-06 15:27:25.596213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:58.313 [2024-11-06 15:27:25.596290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:58.313 [2024-11-06 15:27:25.596384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.313 [2024-11-06 15:27:25.596405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:58.573 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:58.573 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:22:58.573 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.573 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.573 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.573 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.573 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.573 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.573 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.573 [2024-11-06 15:27:26.181712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.573 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.573 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:58.573 15:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.573 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.833 Malloc0 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.833 [2024-11-06 15:27:26.272865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.833 15:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.833 { 00:22:58.833 "params": { 00:22:58.833 "name": "Nvme$subsystem", 00:22:58.833 "trtype": "$TEST_TRANSPORT", 00:22:58.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.833 "adrfam": "ipv4", 00:22:58.833 "trsvcid": "$NVMF_PORT", 00:22:58.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.833 "hdgst": ${hdgst:-false}, 00:22:58.833 "ddgst": ${ddgst:-false} 00:22:58.833 }, 00:22:58.833 "method": "bdev_nvme_attach_controller" 00:22:58.833 } 00:22:58.833 EOF 00:22:58.833 )") 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:58.833 15:27:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:58.833 "params": { 00:22:58.833 "name": "Nvme1", 00:22:58.833 "trtype": "tcp", 00:22:58.833 "traddr": "10.0.0.2", 00:22:58.833 "adrfam": "ipv4", 00:22:58.833 "trsvcid": "4420", 00:22:58.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.833 "hdgst": false, 00:22:58.833 "ddgst": false 00:22:58.833 }, 00:22:58.833 "method": "bdev_nvme_attach_controller" 00:22:58.833 }' 00:22:58.833 [2024-11-06 15:27:26.349093] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:22:58.833 [2024-11-06 15:27:26.349176] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3888207 ] 00:22:59.092 [2024-11-06 15:27:26.487546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:59.092 [2024-11-06 15:27:26.607628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.092 [2024-11-06 15:27:26.607703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.092 [2024-11-06 15:27:26.607726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.660 I/O targets: 00:22:59.660 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:59.660 00:22:59.660 00:22:59.660 CUnit - A unit testing framework for C - Version 2.1-3 00:22:59.660 http://cunit.sourceforge.net/ 00:22:59.660 00:22:59.660 00:22:59.660 Suite: bdevio tests on: Nvme1n1 00:22:59.660 Test: blockdev write read block ...passed 00:22:59.918 Test: blockdev write zeroes read block ...passed 00:22:59.918 Test: blockdev write zeroes read no split ...passed 00:22:59.918 Test: blockdev write zeroes 
read split ...passed 00:22:59.918 Test: blockdev write zeroes read split partial ...passed 00:22:59.918 Test: blockdev reset ...[2024-11-06 15:27:27.406586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:59.918 [2024-11-06 15:27:27.406693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032bc00 (9): Bad file descriptor 00:22:59.918 [2024-11-06 15:27:27.423104] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:59.918 passed 00:22:59.918 Test: blockdev write read 8 blocks ...passed 00:22:59.918 Test: blockdev write read size > 128k ...passed 00:22:59.918 Test: blockdev write read invalid size ...passed 00:22:59.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:59.918 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:59.918 Test: blockdev write read max offset ...passed 00:22:59.918 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:59.918 Test: blockdev writev readv 8 blocks ...passed 00:22:59.918 Test: blockdev writev readv 30 x 1block ...passed 00:23:00.235 Test: blockdev writev readv block ...passed 00:23:00.235 Test: blockdev writev readv size > 128k ...passed 00:23:00.235 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:00.235 Test: blockdev comparev and writev ...[2024-11-06 15:27:27.638916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:00.235 [2024-11-06 15:27:27.638966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.235 [2024-11-06 15:27:27.638986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:00.235 
[2024-11-06 15:27:27.639001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:00.235 [2024-11-06 15:27:27.639301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:00.235 [2024-11-06 15:27:27.639318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:00.235 [2024-11-06 15:27:27.639335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:00.235 [2024-11-06 15:27:27.639346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:00.235 [2024-11-06 15:27:27.639645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:00.235 [2024-11-06 15:27:27.639662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:00.235 [2024-11-06 15:27:27.639677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:00.235 [2024-11-06 15:27:27.639687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:00.235 [2024-11-06 15:27:27.639961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:00.235 [2024-11-06 15:27:27.639979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:00.235 [2024-11-06 15:27:27.639995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:00.235 [2024-11-06 15:27:27.640006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:00.235 passed 00:23:00.235 Test: blockdev nvme passthru rw ...passed 00:23:00.235 Test: blockdev nvme passthru vendor specific ...[2024-11-06 15:27:27.721595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:00.235 [2024-11-06 15:27:27.721627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:00.235 [2024-11-06 15:27:27.721781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:00.235 [2024-11-06 15:27:27.721796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:00.235 [2024-11-06 15:27:27.721913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:00.235 [2024-11-06 15:27:27.721927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:00.235 [2024-11-06 15:27:27.722052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:00.235 [2024-11-06 15:27:27.722067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:00.235 passed 00:23:00.235 Test: blockdev nvme admin passthru ...passed 00:23:00.235 Test: blockdev copy ...passed 00:23:00.235 00:23:00.235 Run Summary: Type Total Ran Passed Failed Inactive 00:23:00.235 suites 1 1 n/a 0 0 00:23:00.235 tests 23 23 23 0 0 00:23:00.235 asserts 152 152 152 0 n/a 00:23:00.235 00:23:00.235 Elapsed time = 1.143 
seconds 00:23:00.806 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:00.806 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.806 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:00.806 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.065 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:01.065 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:01.065 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:01.065 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:01.065 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:01.065 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:01.065 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:01.065 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:01.065 rmmod nvme_tcp 00:23:01.065 rmmod nvme_fabrics 00:23:01.065 rmmod nvme_keyring 00:23:01.065 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:01.065 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:23:01.065 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:01.065 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3887958 ']' 00:23:01.065 15:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3887958 00:23:01.066 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 3887958 ']' 00:23:01.066 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 3887958 00:23:01.066 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:23:01.066 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:01.066 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3887958 00:23:01.066 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:23:01.066 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:23:01.066 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3887958' 00:23:01.066 killing process with pid 3887958 00:23:01.066 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 3887958 00:23:01.066 15:27:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 3887958 00:23:02.003 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.003 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.003 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.003 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:02.003 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.003 15:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:02.003 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.003 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.003 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:02.003 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.003 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.003 15:27:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.908 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:03.908 00:23:03.908 real 0m12.297s 00:23:03.908 user 0m20.607s 00:23:03.908 sys 0m5.568s 00:23:03.908 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:03.908 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:03.908 ************************************ 00:23:03.908 END TEST nvmf_bdevio_no_huge 00:23:03.908 ************************************ 00:23:03.908 15:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:03.908 15:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:03.908 15:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:03.908 15:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:03.908 
************************************ 00:23:03.908 START TEST nvmf_tls 00:23:03.908 ************************************ 00:23:03.908 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:03.908 * Looking for test storage... 00:23:03.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:04.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.169 --rc genhtml_branch_coverage=1 00:23:04.169 --rc genhtml_function_coverage=1 00:23:04.169 --rc genhtml_legend=1 00:23:04.169 --rc geninfo_all_blocks=1 00:23:04.169 --rc geninfo_unexecuted_blocks=1 00:23:04.169 00:23:04.169 ' 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:04.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.169 --rc genhtml_branch_coverage=1 00:23:04.169 --rc genhtml_function_coverage=1 00:23:04.169 --rc genhtml_legend=1 00:23:04.169 --rc geninfo_all_blocks=1 00:23:04.169 --rc geninfo_unexecuted_blocks=1 00:23:04.169 00:23:04.169 ' 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:04.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.169 --rc genhtml_branch_coverage=1 00:23:04.169 --rc genhtml_function_coverage=1 00:23:04.169 --rc genhtml_legend=1 00:23:04.169 --rc geninfo_all_blocks=1 00:23:04.169 --rc geninfo_unexecuted_blocks=1 00:23:04.169 00:23:04.169 ' 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:04.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.169 --rc genhtml_branch_coverage=1 00:23:04.169 --rc genhtml_function_coverage=1 00:23:04.169 --rc genhtml_legend=1 00:23:04.169 --rc geninfo_all_blocks=1 00:23:04.169 --rc geninfo_unexecuted_blocks=1 00:23:04.169 00:23:04.169 ' 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.169 
15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.169 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:04.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:04.170 15:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.740 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.740 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:10.740 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:10.740 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.741 15:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:10.741 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:10.741 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.741 15:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:10.741 Found net devices under 0000:86:00.0: cvl_0_0 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:10.741 Found net devices under 0000:86:00.1: cvl_0_1 00:23:10.741 15:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:10.741 
15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:10.741 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:10.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:23:10.742 00:23:10.742 --- 10.0.0.2 ping statistics --- 00:23:10.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.742 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:10.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:23:10.742 00:23:10.742 --- 10.0.0.1 ping statistics --- 00:23:10.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.742 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3892107 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3892107 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3892107 ']' 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:10.742 15:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.742 [2024-11-06 15:27:37.661377] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:23:10.742 [2024-11-06 15:27:37.661467] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.742 [2024-11-06 15:27:37.793736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.742 [2024-11-06 15:27:37.898118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.742 [2024-11-06 15:27:37.898160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:10.742 [2024-11-06 15:27:37.898170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.742 [2024-11-06 15:27:37.898180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.742 [2024-11-06 15:27:37.898187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.742 [2024-11-06 15:27:37.899698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.001 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:11.001 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:11.001 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.001 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:11.001 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.001 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.001 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:11.001 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:11.259 true 00:23:11.259 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:11.259 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:11.259 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:11.259 15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:11.259 
15:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:11.518 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:11.518 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:11.777 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:11.777 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:11.777 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:12.035 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:12.035 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:12.035 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:12.035 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:12.036 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:12.036 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:12.294 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:12.294 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:12.294 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:23:12.553 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:12.553 15:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:12.553 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:12.553 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:12.553 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:12.812 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:12.812 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:13.071 15:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:13.071 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:13.072 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:13.072 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:13.072 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:13.072 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:13.072 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.aImVXwxuE2 00:23:13.072 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:13.072 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.rFXc9wcz4O 00:23:13.072 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:13.072 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:13.072 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.aImVXwxuE2 00:23:13.072 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.rFXc9wcz4O 00:23:13.072 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:13.330 15:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:13.898 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.aImVXwxuE2 00:23:13.898 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aImVXwxuE2 00:23:13.898 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:13.898 [2024-11-06 15:27:41.485427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.898 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:14.157 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:14.416 [2024-11-06 15:27:41.850341] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.416 [2024-11-06 15:27:41.850602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.416 15:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:14.675 malloc0 00:23:14.675 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:14.675 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aImVXwxuE2 00:23:14.934 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:15.193 15:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aImVXwxuE2 00:23:25.173 Initializing NVMe Controllers 00:23:25.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:25.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:25.173 Initialization complete. Launching workers. 
00:23:25.173 ======================================================== 00:23:25.173 Latency(us) 00:23:25.173 Device Information : IOPS MiB/s Average min max 00:23:25.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13005.39 50.80 4921.40 1152.93 7160.88 00:23:25.173 ======================================================== 00:23:25.173 Total : 13005.39 50.80 4921.40 1152.93 7160.88 00:23:25.173 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aImVXwxuE2 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aImVXwxuE2 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3894560 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3894560 /var/tmp/bdevperf.sock 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3894560 ']' 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:25.432 15:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.432 [2024-11-06 15:27:52.920280] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:23:25.432 [2024-11-06 15:27:52.920366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894560 ] 00:23:25.432 [2024-11-06 15:27:53.046524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.691 [2024-11-06 15:27:53.157053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.259 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:26.259 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:26.259 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aImVXwxuE2 00:23:26.521 15:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:23:26.521 [2024-11-06 15:27:54.100769] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.780 TLSTESTn1 00:23:26.780 15:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:26.780 Running I/O for 10 seconds... 00:23:28.655 4509.00 IOPS, 17.61 MiB/s [2024-11-06T14:27:57.671Z] 4644.00 IOPS, 18.14 MiB/s [2024-11-06T14:27:58.608Z] 4626.00 IOPS, 18.07 MiB/s [2024-11-06T14:27:59.546Z] 4637.75 IOPS, 18.12 MiB/s [2024-11-06T14:28:00.482Z] 4635.80 IOPS, 18.11 MiB/s [2024-11-06T14:28:01.418Z] 4639.83 IOPS, 18.12 MiB/s [2024-11-06T14:28:02.354Z] 4654.14 IOPS, 18.18 MiB/s [2024-11-06T14:28:03.732Z] 4654.12 IOPS, 18.18 MiB/s [2024-11-06T14:28:04.669Z] 4656.44 IOPS, 18.19 MiB/s [2024-11-06T14:28:04.669Z] 4662.60 IOPS, 18.21 MiB/s 00:23:37.031 Latency(us) 00:23:37.031 [2024-11-06T14:28:04.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.031 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:37.031 Verification LBA range: start 0x0 length 0x2000 00:23:37.031 TLSTESTn1 : 10.02 4666.87 18.23 0.00 0.00 27384.60 5710.99 47934.90 00:23:37.031 [2024-11-06T14:28:04.669Z] =================================================================================================================== 00:23:37.031 [2024-11-06T14:28:04.669Z] Total : 4666.87 18.23 0.00 0.00 27384.60 5710.99 47934.90 00:23:37.031 { 00:23:37.031 "results": [ 00:23:37.031 { 00:23:37.031 "job": "TLSTESTn1", 00:23:37.031 "core_mask": "0x4", 00:23:37.031 "workload": "verify", 00:23:37.031 "status": "finished", 00:23:37.031 "verify_range": { 00:23:37.031 "start": 0, 00:23:37.031 "length": 8192 00:23:37.031 }, 00:23:37.031 "queue_depth": 128, 00:23:37.031 "io_size": 4096, 00:23:37.031 "runtime": 10.017855, 00:23:37.031 "iops": 
4666.867308420815, 00:23:37.031 "mibps": 18.229950423518808, 00:23:37.031 "io_failed": 0, 00:23:37.031 "io_timeout": 0, 00:23:37.031 "avg_latency_us": 27384.60427756592, 00:23:37.031 "min_latency_us": 5710.994285714286, 00:23:37.031 "max_latency_us": 47934.90285714286 00:23:37.031 } 00:23:37.031 ], 00:23:37.031 "core_count": 1 00:23:37.031 } 00:23:37.032 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:37.032 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3894560 00:23:37.032 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3894560 ']' 00:23:37.032 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3894560 00:23:37.032 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:37.032 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:37.032 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3894560 00:23:37.032 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:37.032 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:37.032 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3894560' 00:23:37.032 killing process with pid 3894560 00:23:37.032 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3894560 00:23:37.032 Received shutdown signal, test time was about 10.000000 seconds 00:23:37.032 00:23:37.032 Latency(us) 00:23:37.032 [2024-11-06T14:28:04.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.032 [2024-11-06T14:28:04.670Z] 
=================================================================================================================== 00:23:37.032 [2024-11-06T14:28:04.670Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.032 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3894560 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rFXc9wcz4O 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rFXc9wcz4O 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rFXc9wcz4O 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rFXc9wcz4O 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3896620 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3896620 /var/tmp/bdevperf.sock 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3896620 ']' 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:37.969 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.969 [2024-11-06 15:28:05.353726] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:37.969 [2024-11-06 15:28:05.353809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896620 ] 00:23:37.969 [2024-11-06 15:28:05.472315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.969 [2024-11-06 15:28:05.578457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.537 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:38.537 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:38.537 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rFXc9wcz4O 00:23:38.796 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.055 [2024-11-06 15:28:06.489127] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.055 [2024-11-06 15:28:06.498451] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:39.055 [2024-11-06 15:28:06.498971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (107): Transport endpoint is not connected 00:23:39.055 [2024-11-06 15:28:06.499951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:23:39.055 
[2024-11-06 15:28:06.500952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:39.055 [2024-11-06 15:28:06.500972] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:39.055 [2024-11-06 15:28:06.500989] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:39.055 [2024-11-06 15:28:06.501003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:39.055 request: 00:23:39.055 { 00:23:39.055 "name": "TLSTEST", 00:23:39.055 "trtype": "tcp", 00:23:39.055 "traddr": "10.0.0.2", 00:23:39.055 "adrfam": "ipv4", 00:23:39.055 "trsvcid": "4420", 00:23:39.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.055 "prchk_reftag": false, 00:23:39.055 "prchk_guard": false, 00:23:39.055 "hdgst": false, 00:23:39.055 "ddgst": false, 00:23:39.055 "psk": "key0", 00:23:39.055 "allow_unrecognized_csi": false, 00:23:39.055 "method": "bdev_nvme_attach_controller", 00:23:39.055 "req_id": 1 00:23:39.055 } 00:23:39.055 Got JSON-RPC error response 00:23:39.055 response: 00:23:39.055 { 00:23:39.055 "code": -5, 00:23:39.055 "message": "Input/output error" 00:23:39.055 } 00:23:39.055 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3896620 00:23:39.055 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3896620 ']' 00:23:39.055 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3896620 00:23:39.055 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:39.055 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:39.055 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3896620 00:23:39.055 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:39.055 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:39.055 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3896620' 00:23:39.055 killing process with pid 3896620 00:23:39.055 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3896620 00:23:39.055 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.055 00:23:39.055 Latency(us) 00:23:39.055 [2024-11-06T14:28:06.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.055 [2024-11-06T14:28:06.693Z] =================================================================================================================== 00:23:39.055 [2024-11-06T14:28:06.693Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:39.055 15:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3896620 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aImVXwxuE2 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aImVXwxuE2 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aImVXwxuE2 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aImVXwxuE2 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3896934 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3896934 
/var/tmp/bdevperf.sock 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3896934 ']' 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:39.992 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:39.993 15:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.993 [2024-11-06 15:28:07.491301] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:39.993 [2024-11-06 15:28:07.491387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896934 ] 00:23:39.993 [2024-11-06 15:28:07.614123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.252 [2024-11-06 15:28:07.720054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.819 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:40.819 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:40.819 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aImVXwxuE2 00:23:41.078 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:41.078 [2024-11-06 15:28:08.651984] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.078 [2024-11-06 15:28:08.663713] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:41.078 [2024-11-06 15:28:08.663744] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:41.078 [2024-11-06 15:28:08.663778] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:41.078 [2024-11-06 15:28:08.664794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (107): Transport endpoint is not connected 00:23:41.078 [2024-11-06 15:28:08.665778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:23:41.078 [2024-11-06 15:28:08.666772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:41.078 [2024-11-06 15:28:08.666795] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:41.078 [2024-11-06 15:28:08.666809] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:41.078 [2024-11-06 15:28:08.666829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:41.078 request: 00:23:41.078 { 00:23:41.078 "name": "TLSTEST", 00:23:41.078 "trtype": "tcp", 00:23:41.078 "traddr": "10.0.0.2", 00:23:41.078 "adrfam": "ipv4", 00:23:41.078 "trsvcid": "4420", 00:23:41.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.078 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:41.078 "prchk_reftag": false, 00:23:41.078 "prchk_guard": false, 00:23:41.078 "hdgst": false, 00:23:41.078 "ddgst": false, 00:23:41.078 "psk": "key0", 00:23:41.078 "allow_unrecognized_csi": false, 00:23:41.078 "method": "bdev_nvme_attach_controller", 00:23:41.078 "req_id": 1 00:23:41.078 } 00:23:41.078 Got JSON-RPC error response 00:23:41.078 response: 00:23:41.078 { 00:23:41.078 "code": -5, 00:23:41.078 "message": "Input/output error" 00:23:41.078 } 00:23:41.078 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3896934 00:23:41.078 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3896934 ']' 00:23:41.078 
15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3896934 00:23:41.078 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:41.078 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:41.078 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3896934 00:23:41.337 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:41.337 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:41.337 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3896934' 00:23:41.337 killing process with pid 3896934 00:23:41.337 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3896934 00:23:41.337 Received shutdown signal, test time was about 10.000000 seconds 00:23:41.337 00:23:41.337 Latency(us) 00:23:41.337 [2024-11-06T14:28:08.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.337 [2024-11-06T14:28:08.975Z] =================================================================================================================== 00:23:41.337 [2024-11-06T14:28:08.975Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:41.337 15:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3896934 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:42.274 
15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aImVXwxuE2 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aImVXwxuE2 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aImVXwxuE2 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aImVXwxuE2 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3897328 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- 
# trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3897328 /var/tmp/bdevperf.sock 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3897328 ']' 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:42.274 15:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.274 [2024-11-06 15:28:09.658200] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:42.274 [2024-11-06 15:28:09.658290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897328 ] 00:23:42.274 [2024-11-06 15:28:09.772869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.274 [2024-11-06 15:28:09.881251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.841 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:42.841 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:42.841 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aImVXwxuE2 00:23:43.100 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:43.360 [2024-11-06 15:28:10.802218] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.360 [2024-11-06 15:28:10.809636] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:43.360 [2024-11-06 15:28:10.809669] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:43.360 [2024-11-06 15:28:10.809705] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:43.360 [2024-11-06 15:28:10.810014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (107): Transport endpoint is not connected 00:23:43.360 [2024-11-06 15:28:10.810994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:23:43.360 [2024-11-06 15:28:10.811997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:43.360 [2024-11-06 15:28:10.812018] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:43.360 [2024-11-06 15:28:10.812032] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:43.360 [2024-11-06 15:28:10.812046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:43.360 request: 00:23:43.360 { 00:23:43.360 "name": "TLSTEST", 00:23:43.360 "trtype": "tcp", 00:23:43.360 "traddr": "10.0.0.2", 00:23:43.360 "adrfam": "ipv4", 00:23:43.360 "trsvcid": "4420", 00:23:43.360 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:43.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.360 "prchk_reftag": false, 00:23:43.360 "prchk_guard": false, 00:23:43.360 "hdgst": false, 00:23:43.360 "ddgst": false, 00:23:43.360 "psk": "key0", 00:23:43.360 "allow_unrecognized_csi": false, 00:23:43.360 "method": "bdev_nvme_attach_controller", 00:23:43.360 "req_id": 1 00:23:43.360 } 00:23:43.360 Got JSON-RPC error response 00:23:43.360 response: 00:23:43.360 { 00:23:43.360 "code": -5, 00:23:43.360 "message": "Input/output error" 00:23:43.360 } 00:23:43.360 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3897328 00:23:43.360 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3897328 ']' 00:23:43.360 
15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3897328 00:23:43.360 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:43.360 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:43.360 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3897328 00:23:43.360 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:43.360 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:43.360 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3897328' 00:23:43.360 killing process with pid 3897328 00:23:43.360 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3897328 00:23:43.360 Received shutdown signal, test time was about 10.000000 seconds 00:23:43.360 00:23:43.360 Latency(us) 00:23:43.360 [2024-11-06T14:28:10.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.360 [2024-11-06T14:28:10.998Z] =================================================================================================================== 00:23:43.360 [2024-11-06T14:28:10.998Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:43.360 15:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3897328 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:44.297 
15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3897665 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:44.297 15:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3897665 /var/tmp/bdevperf.sock 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3897665 ']' 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:44.297 15:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.297 [2024-11-06 15:28:11.808504] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:44.297 [2024-11-06 15:28:11.808597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897665 ] 00:23:44.556 [2024-11-06 15:28:11.935400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.556 [2024-11-06 15:28:12.045613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.124 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:45.124 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:45.124 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:45.383 [2024-11-06 15:28:12.774625] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:45.383 [2024-11-06 15:28:12.774667] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:45.383 request: 00:23:45.383 { 00:23:45.383 "name": "key0", 00:23:45.383 "path": "", 00:23:45.383 "method": "keyring_file_add_key", 00:23:45.383 "req_id": 1 00:23:45.383 } 00:23:45.383 Got JSON-RPC error response 00:23:45.383 response: 00:23:45.383 { 00:23:45.383 "code": -1, 00:23:45.383 "message": "Operation not permitted" 00:23:45.383 } 00:23:45.383 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:45.383 [2024-11-06 15:28:12.963240] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:45.383 [2024-11-06 15:28:12.963285] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:45.383 request: 00:23:45.383 { 00:23:45.383 "name": "TLSTEST", 00:23:45.383 "trtype": "tcp", 00:23:45.383 "traddr": "10.0.0.2", 00:23:45.383 "adrfam": "ipv4", 00:23:45.383 "trsvcid": "4420", 00:23:45.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:45.383 "prchk_reftag": false, 00:23:45.383 "prchk_guard": false, 00:23:45.383 "hdgst": false, 00:23:45.383 "ddgst": false, 00:23:45.383 "psk": "key0", 00:23:45.383 "allow_unrecognized_csi": false, 00:23:45.383 "method": "bdev_nvme_attach_controller", 00:23:45.383 "req_id": 1 00:23:45.383 } 00:23:45.383 Got JSON-RPC error response 00:23:45.383 response: 00:23:45.383 { 00:23:45.383 "code": -126, 00:23:45.383 "message": "Required key not available" 00:23:45.383 } 00:23:45.383 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3897665 00:23:45.383 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3897665 ']' 00:23:45.383 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3897665 00:23:45.383 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:45.383 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:45.383 15:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3897665 00:23:45.643 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:45.643 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:45.643 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3897665' 00:23:45.643 killing process with pid 3897665 
00:23:45.643 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3897665 00:23:45.643 Received shutdown signal, test time was about 10.000000 seconds 00:23:45.643 00:23:45.643 Latency(us) 00:23:45.643 [2024-11-06T14:28:13.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.643 [2024-11-06T14:28:13.281Z] =================================================================================================================== 00:23:45.643 [2024-11-06T14:28:13.281Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:45.643 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3897665 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3892107 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3892107 ']' 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3892107 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3892107 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# process_name=reactor_1 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3892107' 00:23:46.580 killing process with pid 3892107 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3892107 00:23:46.580 15:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3892107 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.3NDmuKDv3S 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:47.958 15:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.3NDmuKDv3S 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3898274 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3898274 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3898274 ']' 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:47.958 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.958 [2024-11-06 15:28:15.314796] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:47.958 [2024-11-06 15:28:15.314889] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.958 [2024-11-06 15:28:15.443683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.958 [2024-11-06 15:28:15.545076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.958 [2024-11-06 15:28:15.545122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.958 [2024-11-06 15:28:15.545133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.958 [2024-11-06 15:28:15.545144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.958 [2024-11-06 15:28:15.545151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:47.958 [2024-11-06 15:28:15.546614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.525 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:48.525 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:48.525 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:48.525 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:48.525 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.525 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.525 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.3NDmuKDv3S 00:23:48.525 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3NDmuKDv3S 00:23:48.525 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:48.783 [2024-11-06 15:28:16.322243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.783 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:49.042 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:49.301 [2024-11-06 15:28:16.715240] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:49.301 [2024-11-06 15:28:16.715457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:49.301 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:49.559 malloc0 00:23:49.559 15:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:49.559 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3NDmuKDv3S 00:23:49.818 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3NDmuKDv3S 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3NDmuKDv3S 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3898697 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3898697 /var/tmp/bdevperf.sock 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3898697 ']' 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:50.078 15:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.078 [2024-11-06 15:28:17.624995] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:23:50.078 [2024-11-06 15:28:17.625088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898697 ] 00:23:50.337 [2024-11-06 15:28:17.752170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.337 [2024-11-06 15:28:17.853554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.905 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:50.905 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:50.905 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3NDmuKDv3S 00:23:51.163 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.163 [2024-11-06 15:28:18.798430] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.422 TLSTESTn1 00:23:51.422 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:51.422 Running I/O for 10 seconds... 
00:23:53.364 4456.00 IOPS, 17.41 MiB/s [2024-11-06T14:28:22.378Z] 4522.00 IOPS, 17.66 MiB/s [2024-11-06T14:28:23.314Z] 4516.67 IOPS, 17.64 MiB/s [2024-11-06T14:28:24.250Z] 4544.00 IOPS, 17.75 MiB/s [2024-11-06T14:28:25.226Z] 4566.40 IOPS, 17.84 MiB/s [2024-11-06T14:28:26.187Z] 4568.50 IOPS, 17.85 MiB/s [2024-11-06T14:28:27.122Z] 4517.29 IOPS, 17.65 MiB/s [2024-11-06T14:28:28.058Z] 4531.62 IOPS, 17.70 MiB/s [2024-11-06T14:28:29.435Z] 4551.44 IOPS, 17.78 MiB/s [2024-11-06T14:28:29.435Z] 4571.30 IOPS, 17.86 MiB/s 00:24:01.797 Latency(us) 00:24:01.797 [2024-11-06T14:28:29.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.797 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:01.797 Verification LBA range: start 0x0 length 0x2000 00:24:01.797 TLSTESTn1 : 10.02 4573.93 17.87 0.00 0.00 27939.14 7770.70 26588.89 00:24:01.797 [2024-11-06T14:28:29.435Z] =================================================================================================================== 00:24:01.797 [2024-11-06T14:28:29.435Z] Total : 4573.93 17.87 0.00 0.00 27939.14 7770.70 26588.89 00:24:01.797 { 00:24:01.797 "results": [ 00:24:01.797 { 00:24:01.797 "job": "TLSTESTn1", 00:24:01.797 "core_mask": "0x4", 00:24:01.797 "workload": "verify", 00:24:01.797 "status": "finished", 00:24:01.797 "verify_range": { 00:24:01.797 "start": 0, 00:24:01.797 "length": 8192 00:24:01.797 }, 00:24:01.797 "queue_depth": 128, 00:24:01.797 "io_size": 4096, 00:24:01.797 "runtime": 10.021803, 00:24:01.797 "iops": 4573.927465945998, 00:24:01.797 "mibps": 17.866904163851554, 00:24:01.797 "io_failed": 0, 00:24:01.797 "io_timeout": 0, 00:24:01.797 "avg_latency_us": 27939.13989929557, 00:24:01.797 "min_latency_us": 7770.697142857143, 00:24:01.797 "max_latency_us": 26588.891428571427 00:24:01.797 } 00:24:01.797 ], 00:24:01.797 "core_count": 1 00:24:01.797 } 00:24:01.797 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:24:01.797 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3898697 00:24:01.797 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3898697 ']' 00:24:01.797 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3898697 00:24:01.797 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:01.797 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:01.797 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3898697 00:24:01.797 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:01.797 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:01.797 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3898697' 00:24:01.797 killing process with pid 3898697 00:24:01.797 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3898697 00:24:01.797 Received shutdown signal, test time was about 10.000000 seconds 00:24:01.797 00:24:01.797 Latency(us) 00:24:01.797 [2024-11-06T14:28:29.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.797 [2024-11-06T14:28:29.435Z] =================================================================================================================== 00:24:01.797 [2024-11-06T14:28:29.435Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.797 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3898697 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.3NDmuKDv3S 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3NDmuKDv3S 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3NDmuKDv3S 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3NDmuKDv3S 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3NDmuKDv3S 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3900614 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3900614 /var/tmp/bdevperf.sock 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3900614 ']' 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:02.732 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.732 [2024-11-06 15:28:30.089943] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:24:02.732 [2024-11-06 15:28:30.090036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900614 ] 00:24:02.732 [2024-11-06 15:28:30.217644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.733 [2024-11-06 15:28:30.325151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.300 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:03.300 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:03.300 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3NDmuKDv3S 00:24:03.559 [2024-11-06 15:28:31.066472] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.3NDmuKDv3S': 0100666 00:24:03.559 [2024-11-06 15:28:31.066510] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:03.559 request: 00:24:03.559 { 00:24:03.559 "name": "key0", 00:24:03.559 "path": "/tmp/tmp.3NDmuKDv3S", 00:24:03.559 "method": "keyring_file_add_key", 00:24:03.559 "req_id": 1 00:24:03.559 } 00:24:03.559 Got JSON-RPC error response 00:24:03.559 response: 00:24:03.559 { 00:24:03.559 "code": -1, 00:24:03.559 "message": "Operation not permitted" 00:24:03.559 } 00:24:03.559 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:03.818 [2024-11-06 15:28:31.259086] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.818 [2024-11-06 15:28:31.259133] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:03.818 request: 00:24:03.818 { 00:24:03.818 "name": "TLSTEST", 00:24:03.818 "trtype": "tcp", 00:24:03.818 "traddr": "10.0.0.2", 00:24:03.818 "adrfam": "ipv4", 00:24:03.818 "trsvcid": "4420", 00:24:03.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.818 "prchk_reftag": false, 00:24:03.818 "prchk_guard": false, 00:24:03.818 "hdgst": false, 00:24:03.818 "ddgst": false, 00:24:03.818 "psk": "key0", 00:24:03.818 "allow_unrecognized_csi": false, 00:24:03.818 "method": "bdev_nvme_attach_controller", 00:24:03.818 "req_id": 1 00:24:03.818 } 00:24:03.818 Got JSON-RPC error response 00:24:03.818 response: 00:24:03.818 { 00:24:03.818 "code": -126, 00:24:03.818 "message": "Required key not available" 00:24:03.818 } 00:24:03.818 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3900614 00:24:03.818 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3900614 ']' 00:24:03.818 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3900614 00:24:03.818 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:03.818 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:03.818 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3900614 00:24:03.818 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:03.818 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:03.818 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 3900614' 00:24:03.818 killing process with pid 3900614 00:24:03.818 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3900614 00:24:03.818 Received shutdown signal, test time was about 10.000000 seconds 00:24:03.818 00:24:03.818 Latency(us) 00:24:03.818 [2024-11-06T14:28:31.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.818 [2024-11-06T14:28:31.456Z] =================================================================================================================== 00:24:03.818 [2024-11-06T14:28:31.456Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:03.818 15:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3900614 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3898274 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3898274 ']' 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3898274 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3898274 00:24:04.753 
15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3898274' 00:24:04.753 killing process with pid 3898274 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3898274 00:24:04.753 15:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3898274 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3901317 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3901317 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3901317 ']' 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:24:06.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:06.130 15:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.130 [2024-11-06 15:28:33.545681] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:06.130 [2024-11-06 15:28:33.545783] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.130 [2024-11-06 15:28:33.675567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.391 [2024-11-06 15:28:33.785130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.391 [2024-11-06 15:28:33.785175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.391 [2024-11-06 15:28:33.785186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.391 [2024-11-06 15:28:33.785198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.391 [2024-11-06 15:28:33.785211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:06.391 [2024-11-06 15:28:33.786632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.3NDmuKDv3S 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.3NDmuKDv3S 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.3NDmuKDv3S 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3NDmuKDv3S 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:06.959 [2024-11-06 15:28:34.561839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.959 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:07.218 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:07.477 [2024-11-06 15:28:34.962847] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:07.477 [2024-11-06 15:28:34.963081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.477 15:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:07.736 malloc0 00:24:07.736 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:07.995 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3NDmuKDv3S 00:24:07.995 [2024-11-06 15:28:35.574469] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.3NDmuKDv3S': 0100666 00:24:07.995 [2024-11-06 15:28:35.574506] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:07.995 request: 00:24:07.995 { 00:24:07.995 "name": "key0", 00:24:07.995 "path": "/tmp/tmp.3NDmuKDv3S", 00:24:07.995 "method": "keyring_file_add_key", 00:24:07.995 "req_id": 1 
00:24:07.995 } 00:24:07.995 Got JSON-RPC error response 00:24:07.995 response: 00:24:07.995 { 00:24:07.995 "code": -1, 00:24:07.995 "message": "Operation not permitted" 00:24:07.995 } 00:24:07.995 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:08.254 [2024-11-06 15:28:35.771020] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:08.254 [2024-11-06 15:28:35.771074] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:08.254 request: 00:24:08.254 { 00:24:08.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.254 "host": "nqn.2016-06.io.spdk:host1", 00:24:08.254 "psk": "key0", 00:24:08.254 "method": "nvmf_subsystem_add_host", 00:24:08.254 "req_id": 1 00:24:08.254 } 00:24:08.254 Got JSON-RPC error response 00:24:08.254 response: 00:24:08.254 { 00:24:08.254 "code": -32603, 00:24:08.254 "message": "Internal error" 00:24:08.254 } 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3901317 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3901317 ']' 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3901317 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:08.254 15:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3901317 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3901317' 00:24:08.254 killing process with pid 3901317 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3901317 00:24:08.254 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3901317 00:24:09.632 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.3NDmuKDv3S 00:24:09.632 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:09.632 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:09.632 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:09.632 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.632 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3901821 00:24:09.632 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:09.632 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3901821 00:24:09.632 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3901821 ']' 00:24:09.632 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.632 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:09.632 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.633 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:09.633 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.633 [2024-11-06 15:28:37.135380] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:09.633 [2024-11-06 15:28:37.135465] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.633 [2024-11-06 15:28:37.263168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.892 [2024-11-06 15:28:37.366857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.892 [2024-11-06 15:28:37.366907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.892 [2024-11-06 15:28:37.366918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.892 [2024-11-06 15:28:37.366929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.892 [2024-11-06 15:28:37.366936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:09.892 [2024-11-06 15:28:37.368569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.460 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:10.460 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:10.460 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.460 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.460 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.460 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.460 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.3NDmuKDv3S 00:24:10.460 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3NDmuKDv3S 00:24:10.461 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:10.720 [2024-11-06 15:28:38.135124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.720 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:10.720 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:10.979 [2024-11-06 15:28:38.496071] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:10.979 [2024-11-06 15:28:38.496326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:10.979 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:11.238 malloc0 00:24:11.238 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:11.497 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3NDmuKDv3S 00:24:11.756 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:11.757 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:11.757 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3902296 00:24:11.757 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.757 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3902296 /var/tmp/bdevperf.sock 00:24:11.757 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3902296 ']' 00:24:11.757 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.757 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:11.757 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:24:11.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.757 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:11.757 15:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.016 [2024-11-06 15:28:39.418912] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:12.016 [2024-11-06 15:28:39.419019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902296 ] 00:24:12.016 [2024-11-06 15:28:39.544325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.016 [2024-11-06 15:28:39.651505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.953 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:12.953 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:12.953 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3NDmuKDv3S 00:24:12.953 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:13.212 [2024-11-06 15:28:40.622146] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.212 TLSTESTn1 00:24:13.212 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:13.471 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:13.471 "subsystems": [ 00:24:13.471 { 00:24:13.471 "subsystem": "keyring", 00:24:13.471 "config": [ 00:24:13.471 { 00:24:13.471 "method": "keyring_file_add_key", 00:24:13.471 "params": { 00:24:13.471 "name": "key0", 00:24:13.471 "path": "/tmp/tmp.3NDmuKDv3S" 00:24:13.471 } 00:24:13.471 } 00:24:13.471 ] 00:24:13.471 }, 00:24:13.471 { 00:24:13.471 "subsystem": "iobuf", 00:24:13.471 "config": [ 00:24:13.471 { 00:24:13.471 "method": "iobuf_set_options", 00:24:13.471 "params": { 00:24:13.471 "small_pool_count": 8192, 00:24:13.472 "large_pool_count": 1024, 00:24:13.472 "small_bufsize": 8192, 00:24:13.472 "large_bufsize": 135168, 00:24:13.472 "enable_numa": false 00:24:13.472 } 00:24:13.472 } 00:24:13.472 ] 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "subsystem": "sock", 00:24:13.472 "config": [ 00:24:13.472 { 00:24:13.472 "method": "sock_set_default_impl", 00:24:13.472 "params": { 00:24:13.472 "impl_name": "posix" 00:24:13.472 } 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "method": "sock_impl_set_options", 00:24:13.472 "params": { 00:24:13.472 "impl_name": "ssl", 00:24:13.472 "recv_buf_size": 4096, 00:24:13.472 "send_buf_size": 4096, 00:24:13.472 "enable_recv_pipe": true, 00:24:13.472 "enable_quickack": false, 00:24:13.472 "enable_placement_id": 0, 00:24:13.472 "enable_zerocopy_send_server": true, 00:24:13.472 "enable_zerocopy_send_client": false, 00:24:13.472 "zerocopy_threshold": 0, 00:24:13.472 "tls_version": 0, 00:24:13.472 "enable_ktls": false 00:24:13.472 } 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "method": "sock_impl_set_options", 00:24:13.472 "params": { 00:24:13.472 "impl_name": "posix", 00:24:13.472 "recv_buf_size": 2097152, 00:24:13.472 "send_buf_size": 2097152, 00:24:13.472 "enable_recv_pipe": true, 00:24:13.472 "enable_quickack": false, 00:24:13.472 "enable_placement_id": 0, 
00:24:13.472 "enable_zerocopy_send_server": true, 00:24:13.472 "enable_zerocopy_send_client": false, 00:24:13.472 "zerocopy_threshold": 0, 00:24:13.472 "tls_version": 0, 00:24:13.472 "enable_ktls": false 00:24:13.472 } 00:24:13.472 } 00:24:13.472 ] 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "subsystem": "vmd", 00:24:13.472 "config": [] 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "subsystem": "accel", 00:24:13.472 "config": [ 00:24:13.472 { 00:24:13.472 "method": "accel_set_options", 00:24:13.472 "params": { 00:24:13.472 "small_cache_size": 128, 00:24:13.472 "large_cache_size": 16, 00:24:13.472 "task_count": 2048, 00:24:13.472 "sequence_count": 2048, 00:24:13.472 "buf_count": 2048 00:24:13.472 } 00:24:13.472 } 00:24:13.472 ] 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "subsystem": "bdev", 00:24:13.472 "config": [ 00:24:13.472 { 00:24:13.472 "method": "bdev_set_options", 00:24:13.472 "params": { 00:24:13.472 "bdev_io_pool_size": 65535, 00:24:13.472 "bdev_io_cache_size": 256, 00:24:13.472 "bdev_auto_examine": true, 00:24:13.472 "iobuf_small_cache_size": 128, 00:24:13.472 "iobuf_large_cache_size": 16 00:24:13.472 } 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "method": "bdev_raid_set_options", 00:24:13.472 "params": { 00:24:13.472 "process_window_size_kb": 1024, 00:24:13.472 "process_max_bandwidth_mb_sec": 0 00:24:13.472 } 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "method": "bdev_iscsi_set_options", 00:24:13.472 "params": { 00:24:13.472 "timeout_sec": 30 00:24:13.472 } 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "method": "bdev_nvme_set_options", 00:24:13.472 "params": { 00:24:13.472 "action_on_timeout": "none", 00:24:13.472 "timeout_us": 0, 00:24:13.472 "timeout_admin_us": 0, 00:24:13.472 "keep_alive_timeout_ms": 10000, 00:24:13.472 "arbitration_burst": 0, 00:24:13.472 "low_priority_weight": 0, 00:24:13.472 "medium_priority_weight": 0, 00:24:13.472 "high_priority_weight": 0, 00:24:13.472 "nvme_adminq_poll_period_us": 10000, 00:24:13.472 "nvme_ioq_poll_period_us": 0, 
00:24:13.472 "io_queue_requests": 0, 00:24:13.472 "delay_cmd_submit": true, 00:24:13.472 "transport_retry_count": 4, 00:24:13.472 "bdev_retry_count": 3, 00:24:13.472 "transport_ack_timeout": 0, 00:24:13.472 "ctrlr_loss_timeout_sec": 0, 00:24:13.472 "reconnect_delay_sec": 0, 00:24:13.472 "fast_io_fail_timeout_sec": 0, 00:24:13.472 "disable_auto_failback": false, 00:24:13.472 "generate_uuids": false, 00:24:13.472 "transport_tos": 0, 00:24:13.472 "nvme_error_stat": false, 00:24:13.472 "rdma_srq_size": 0, 00:24:13.472 "io_path_stat": false, 00:24:13.472 "allow_accel_sequence": false, 00:24:13.472 "rdma_max_cq_size": 0, 00:24:13.472 "rdma_cm_event_timeout_ms": 0, 00:24:13.472 "dhchap_digests": [ 00:24:13.472 "sha256", 00:24:13.472 "sha384", 00:24:13.472 "sha512" 00:24:13.472 ], 00:24:13.472 "dhchap_dhgroups": [ 00:24:13.472 "null", 00:24:13.472 "ffdhe2048", 00:24:13.472 "ffdhe3072", 00:24:13.472 "ffdhe4096", 00:24:13.472 "ffdhe6144", 00:24:13.472 "ffdhe8192" 00:24:13.472 ] 00:24:13.472 } 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "method": "bdev_nvme_set_hotplug", 00:24:13.472 "params": { 00:24:13.472 "period_us": 100000, 00:24:13.472 "enable": false 00:24:13.472 } 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "method": "bdev_malloc_create", 00:24:13.472 "params": { 00:24:13.472 "name": "malloc0", 00:24:13.472 "num_blocks": 8192, 00:24:13.472 "block_size": 4096, 00:24:13.472 "physical_block_size": 4096, 00:24:13.472 "uuid": "c294827c-bea9-4de4-a358-63009baa73f6", 00:24:13.472 "optimal_io_boundary": 0, 00:24:13.472 "md_size": 0, 00:24:13.472 "dif_type": 0, 00:24:13.472 "dif_is_head_of_md": false, 00:24:13.472 "dif_pi_format": 0 00:24:13.472 } 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "method": "bdev_wait_for_examine" 00:24:13.472 } 00:24:13.472 ] 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "subsystem": "nbd", 00:24:13.472 "config": [] 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "subsystem": "scheduler", 00:24:13.472 "config": [ 00:24:13.472 { 00:24:13.472 "method": 
"framework_set_scheduler", 00:24:13.472 "params": { 00:24:13.472 "name": "static" 00:24:13.472 } 00:24:13.472 } 00:24:13.472 ] 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "subsystem": "nvmf", 00:24:13.472 "config": [ 00:24:13.472 { 00:24:13.472 "method": "nvmf_set_config", 00:24:13.472 "params": { 00:24:13.472 "discovery_filter": "match_any", 00:24:13.472 "admin_cmd_passthru": { 00:24:13.472 "identify_ctrlr": false 00:24:13.472 }, 00:24:13.472 "dhchap_digests": [ 00:24:13.472 "sha256", 00:24:13.472 "sha384", 00:24:13.472 "sha512" 00:24:13.472 ], 00:24:13.472 "dhchap_dhgroups": [ 00:24:13.472 "null", 00:24:13.472 "ffdhe2048", 00:24:13.472 "ffdhe3072", 00:24:13.472 "ffdhe4096", 00:24:13.472 "ffdhe6144", 00:24:13.472 "ffdhe8192" 00:24:13.472 ] 00:24:13.472 } 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "method": "nvmf_set_max_subsystems", 00:24:13.472 "params": { 00:24:13.472 "max_subsystems": 1024 00:24:13.472 } 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "method": "nvmf_set_crdt", 00:24:13.472 "params": { 00:24:13.472 "crdt1": 0, 00:24:13.472 "crdt2": 0, 00:24:13.472 "crdt3": 0 00:24:13.472 } 00:24:13.472 }, 00:24:13.472 { 00:24:13.472 "method": "nvmf_create_transport", 00:24:13.472 "params": { 00:24:13.472 "trtype": "TCP", 00:24:13.472 "max_queue_depth": 128, 00:24:13.472 "max_io_qpairs_per_ctrlr": 127, 00:24:13.472 "in_capsule_data_size": 4096, 00:24:13.472 "max_io_size": 131072, 00:24:13.472 "io_unit_size": 131072, 00:24:13.472 "max_aq_depth": 128, 00:24:13.472 "num_shared_buffers": 511, 00:24:13.472 "buf_cache_size": 4294967295, 00:24:13.472 "dif_insert_or_strip": false, 00:24:13.472 "zcopy": false, 00:24:13.472 "c2h_success": false, 00:24:13.472 "sock_priority": 0, 00:24:13.472 "abort_timeout_sec": 1, 00:24:13.472 "ack_timeout": 0, 00:24:13.472 "data_wr_pool_size": 0 00:24:13.472 } 00:24:13.473 }, 00:24:13.473 { 00:24:13.473 "method": "nvmf_create_subsystem", 00:24:13.473 "params": { 00:24:13.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.473 
"allow_any_host": false, 00:24:13.473 "serial_number": "SPDK00000000000001", 00:24:13.473 "model_number": "SPDK bdev Controller", 00:24:13.473 "max_namespaces": 10, 00:24:13.473 "min_cntlid": 1, 00:24:13.473 "max_cntlid": 65519, 00:24:13.473 "ana_reporting": false 00:24:13.473 } 00:24:13.473 }, 00:24:13.473 { 00:24:13.473 "method": "nvmf_subsystem_add_host", 00:24:13.473 "params": { 00:24:13.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.473 "host": "nqn.2016-06.io.spdk:host1", 00:24:13.473 "psk": "key0" 00:24:13.473 } 00:24:13.473 }, 00:24:13.473 { 00:24:13.473 "method": "nvmf_subsystem_add_ns", 00:24:13.473 "params": { 00:24:13.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.473 "namespace": { 00:24:13.473 "nsid": 1, 00:24:13.473 "bdev_name": "malloc0", 00:24:13.473 "nguid": "C294827CBEA94DE4A35863009BAA73F6", 00:24:13.473 "uuid": "c294827c-bea9-4de4-a358-63009baa73f6", 00:24:13.473 "no_auto_visible": false 00:24:13.473 } 00:24:13.473 } 00:24:13.473 }, 00:24:13.473 { 00:24:13.473 "method": "nvmf_subsystem_add_listener", 00:24:13.473 "params": { 00:24:13.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.473 "listen_address": { 00:24:13.473 "trtype": "TCP", 00:24:13.473 "adrfam": "IPv4", 00:24:13.473 "traddr": "10.0.0.2", 00:24:13.473 "trsvcid": "4420" 00:24:13.473 }, 00:24:13.473 "secure_channel": true 00:24:13.473 } 00:24:13.473 } 00:24:13.473 ] 00:24:13.473 } 00:24:13.473 ] 00:24:13.473 }' 00:24:13.473 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:13.732 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:13.732 "subsystems": [ 00:24:13.732 { 00:24:13.732 "subsystem": "keyring", 00:24:13.732 "config": [ 00:24:13.732 { 00:24:13.732 "method": "keyring_file_add_key", 00:24:13.732 "params": { 00:24:13.732 "name": "key0", 00:24:13.732 "path": "/tmp/tmp.3NDmuKDv3S" 00:24:13.732 } 
00:24:13.733 } 00:24:13.733 ] 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "subsystem": "iobuf", 00:24:13.733 "config": [ 00:24:13.733 { 00:24:13.733 "method": "iobuf_set_options", 00:24:13.733 "params": { 00:24:13.733 "small_pool_count": 8192, 00:24:13.733 "large_pool_count": 1024, 00:24:13.733 "small_bufsize": 8192, 00:24:13.733 "large_bufsize": 135168, 00:24:13.733 "enable_numa": false 00:24:13.733 } 00:24:13.733 } 00:24:13.733 ] 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "subsystem": "sock", 00:24:13.733 "config": [ 00:24:13.733 { 00:24:13.733 "method": "sock_set_default_impl", 00:24:13.733 "params": { 00:24:13.733 "impl_name": "posix" 00:24:13.733 } 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "method": "sock_impl_set_options", 00:24:13.733 "params": { 00:24:13.733 "impl_name": "ssl", 00:24:13.733 "recv_buf_size": 4096, 00:24:13.733 "send_buf_size": 4096, 00:24:13.733 "enable_recv_pipe": true, 00:24:13.733 "enable_quickack": false, 00:24:13.733 "enable_placement_id": 0, 00:24:13.733 "enable_zerocopy_send_server": true, 00:24:13.733 "enable_zerocopy_send_client": false, 00:24:13.733 "zerocopy_threshold": 0, 00:24:13.733 "tls_version": 0, 00:24:13.733 "enable_ktls": false 00:24:13.733 } 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "method": "sock_impl_set_options", 00:24:13.733 "params": { 00:24:13.733 "impl_name": "posix", 00:24:13.733 "recv_buf_size": 2097152, 00:24:13.733 "send_buf_size": 2097152, 00:24:13.733 "enable_recv_pipe": true, 00:24:13.733 "enable_quickack": false, 00:24:13.733 "enable_placement_id": 0, 00:24:13.733 "enable_zerocopy_send_server": true, 00:24:13.733 "enable_zerocopy_send_client": false, 00:24:13.733 "zerocopy_threshold": 0, 00:24:13.733 "tls_version": 0, 00:24:13.733 "enable_ktls": false 00:24:13.733 } 00:24:13.733 } 00:24:13.733 ] 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "subsystem": "vmd", 00:24:13.733 "config": [] 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "subsystem": "accel", 00:24:13.733 "config": [ 00:24:13.733 { 00:24:13.733 
"method": "accel_set_options", 00:24:13.733 "params": { 00:24:13.733 "small_cache_size": 128, 00:24:13.733 "large_cache_size": 16, 00:24:13.733 "task_count": 2048, 00:24:13.733 "sequence_count": 2048, 00:24:13.733 "buf_count": 2048 00:24:13.733 } 00:24:13.733 } 00:24:13.733 ] 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "subsystem": "bdev", 00:24:13.733 "config": [ 00:24:13.733 { 00:24:13.733 "method": "bdev_set_options", 00:24:13.733 "params": { 00:24:13.733 "bdev_io_pool_size": 65535, 00:24:13.733 "bdev_io_cache_size": 256, 00:24:13.733 "bdev_auto_examine": true, 00:24:13.733 "iobuf_small_cache_size": 128, 00:24:13.733 "iobuf_large_cache_size": 16 00:24:13.733 } 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "method": "bdev_raid_set_options", 00:24:13.733 "params": { 00:24:13.733 "process_window_size_kb": 1024, 00:24:13.733 "process_max_bandwidth_mb_sec": 0 00:24:13.733 } 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "method": "bdev_iscsi_set_options", 00:24:13.733 "params": { 00:24:13.733 "timeout_sec": 30 00:24:13.733 } 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "method": "bdev_nvme_set_options", 00:24:13.733 "params": { 00:24:13.733 "action_on_timeout": "none", 00:24:13.733 "timeout_us": 0, 00:24:13.733 "timeout_admin_us": 0, 00:24:13.733 "keep_alive_timeout_ms": 10000, 00:24:13.733 "arbitration_burst": 0, 00:24:13.733 "low_priority_weight": 0, 00:24:13.733 "medium_priority_weight": 0, 00:24:13.733 "high_priority_weight": 0, 00:24:13.733 "nvme_adminq_poll_period_us": 10000, 00:24:13.733 "nvme_ioq_poll_period_us": 0, 00:24:13.733 "io_queue_requests": 512, 00:24:13.733 "delay_cmd_submit": true, 00:24:13.733 "transport_retry_count": 4, 00:24:13.733 "bdev_retry_count": 3, 00:24:13.733 "transport_ack_timeout": 0, 00:24:13.733 "ctrlr_loss_timeout_sec": 0, 00:24:13.733 "reconnect_delay_sec": 0, 00:24:13.733 "fast_io_fail_timeout_sec": 0, 00:24:13.733 "disable_auto_failback": false, 00:24:13.733 "generate_uuids": false, 00:24:13.733 "transport_tos": 0, 00:24:13.733 
"nvme_error_stat": false, 00:24:13.733 "rdma_srq_size": 0, 00:24:13.733 "io_path_stat": false, 00:24:13.733 "allow_accel_sequence": false, 00:24:13.733 "rdma_max_cq_size": 0, 00:24:13.733 "rdma_cm_event_timeout_ms": 0, 00:24:13.733 "dhchap_digests": [ 00:24:13.733 "sha256", 00:24:13.733 "sha384", 00:24:13.733 "sha512" 00:24:13.733 ], 00:24:13.733 "dhchap_dhgroups": [ 00:24:13.733 "null", 00:24:13.733 "ffdhe2048", 00:24:13.733 "ffdhe3072", 00:24:13.733 "ffdhe4096", 00:24:13.733 "ffdhe6144", 00:24:13.733 "ffdhe8192" 00:24:13.733 ] 00:24:13.733 } 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "method": "bdev_nvme_attach_controller", 00:24:13.733 "params": { 00:24:13.733 "name": "TLSTEST", 00:24:13.733 "trtype": "TCP", 00:24:13.733 "adrfam": "IPv4", 00:24:13.733 "traddr": "10.0.0.2", 00:24:13.733 "trsvcid": "4420", 00:24:13.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.733 "prchk_reftag": false, 00:24:13.733 "prchk_guard": false, 00:24:13.733 "ctrlr_loss_timeout_sec": 0, 00:24:13.733 "reconnect_delay_sec": 0, 00:24:13.733 "fast_io_fail_timeout_sec": 0, 00:24:13.733 "psk": "key0", 00:24:13.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:13.733 "hdgst": false, 00:24:13.733 "ddgst": false, 00:24:13.733 "multipath": "multipath" 00:24:13.733 } 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "method": "bdev_nvme_set_hotplug", 00:24:13.733 "params": { 00:24:13.733 "period_us": 100000, 00:24:13.733 "enable": false 00:24:13.733 } 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "method": "bdev_wait_for_examine" 00:24:13.733 } 00:24:13.733 ] 00:24:13.733 }, 00:24:13.733 { 00:24:13.733 "subsystem": "nbd", 00:24:13.733 "config": [] 00:24:13.733 } 00:24:13.733 ] 00:24:13.733 }' 00:24:13.733 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3902296 00:24:13.733 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3902296 ']' 00:24:13.733 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# kill -0 3902296 00:24:13.733 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:13.733 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:13.733 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3902296 00:24:13.733 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:13.733 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:13.733 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3902296' 00:24:13.733 killing process with pid 3902296 00:24:13.733 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3902296 00:24:13.733 Received shutdown signal, test time was about 10.000000 seconds 00:24:13.733 00:24:13.733 Latency(us) 00:24:13.733 [2024-11-06T14:28:41.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.733 [2024-11-06T14:28:41.371Z] =================================================================================================================== 00:24:13.733 [2024-11-06T14:28:41.371Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:13.733 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3902296 00:24:14.671 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3901821 00:24:14.671 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3901821 ']' 00:24:14.671 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3901821 00:24:14.671 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:14.671 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:14.671 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3901821 00:24:14.671 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:14.671 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:14.671 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3901821' 00:24:14.671 killing process with pid 3901821 00:24:14.671 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3901821 00:24:14.671 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3901821 00:24:16.051 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:16.051 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:16.051 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.051 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:16.051 "subsystems": [ 00:24:16.051 { 00:24:16.051 "subsystem": "keyring", 00:24:16.051 "config": [ 00:24:16.051 { 00:24:16.051 "method": "keyring_file_add_key", 00:24:16.051 "params": { 00:24:16.051 "name": "key0", 00:24:16.051 "path": "/tmp/tmp.3NDmuKDv3S" 00:24:16.051 } 00:24:16.051 } 00:24:16.051 ] 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "subsystem": "iobuf", 00:24:16.051 "config": [ 00:24:16.051 { 00:24:16.051 "method": "iobuf_set_options", 00:24:16.051 "params": { 00:24:16.051 "small_pool_count": 8192, 00:24:16.051 "large_pool_count": 1024, 00:24:16.051 "small_bufsize": 8192, 00:24:16.051 "large_bufsize": 135168, 00:24:16.051 "enable_numa": false 00:24:16.051 } 00:24:16.051 } 00:24:16.051 ] 00:24:16.051 }, 
00:24:16.051 { 00:24:16.051 "subsystem": "sock", 00:24:16.051 "config": [ 00:24:16.051 { 00:24:16.051 "method": "sock_set_default_impl", 00:24:16.051 "params": { 00:24:16.051 "impl_name": "posix" 00:24:16.051 } 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "method": "sock_impl_set_options", 00:24:16.051 "params": { 00:24:16.051 "impl_name": "ssl", 00:24:16.051 "recv_buf_size": 4096, 00:24:16.051 "send_buf_size": 4096, 00:24:16.051 "enable_recv_pipe": true, 00:24:16.051 "enable_quickack": false, 00:24:16.051 "enable_placement_id": 0, 00:24:16.051 "enable_zerocopy_send_server": true, 00:24:16.051 "enable_zerocopy_send_client": false, 00:24:16.051 "zerocopy_threshold": 0, 00:24:16.051 "tls_version": 0, 00:24:16.051 "enable_ktls": false 00:24:16.051 } 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "method": "sock_impl_set_options", 00:24:16.051 "params": { 00:24:16.051 "impl_name": "posix", 00:24:16.051 "recv_buf_size": 2097152, 00:24:16.051 "send_buf_size": 2097152, 00:24:16.051 "enable_recv_pipe": true, 00:24:16.051 "enable_quickack": false, 00:24:16.051 "enable_placement_id": 0, 00:24:16.051 "enable_zerocopy_send_server": true, 00:24:16.051 "enable_zerocopy_send_client": false, 00:24:16.051 "zerocopy_threshold": 0, 00:24:16.051 "tls_version": 0, 00:24:16.051 "enable_ktls": false 00:24:16.051 } 00:24:16.051 } 00:24:16.051 ] 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "subsystem": "vmd", 00:24:16.051 "config": [] 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "subsystem": "accel", 00:24:16.051 "config": [ 00:24:16.051 { 00:24:16.051 "method": "accel_set_options", 00:24:16.051 "params": { 00:24:16.051 "small_cache_size": 128, 00:24:16.051 "large_cache_size": 16, 00:24:16.051 "task_count": 2048, 00:24:16.051 "sequence_count": 2048, 00:24:16.051 "buf_count": 2048 00:24:16.051 } 00:24:16.051 } 00:24:16.051 ] 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "subsystem": "bdev", 00:24:16.051 "config": [ 00:24:16.051 { 00:24:16.051 "method": "bdev_set_options", 00:24:16.051 "params": { 
00:24:16.051 "bdev_io_pool_size": 65535, 00:24:16.051 "bdev_io_cache_size": 256, 00:24:16.051 "bdev_auto_examine": true, 00:24:16.051 "iobuf_small_cache_size": 128, 00:24:16.051 "iobuf_large_cache_size": 16 00:24:16.051 } 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "method": "bdev_raid_set_options", 00:24:16.051 "params": { 00:24:16.051 "process_window_size_kb": 1024, 00:24:16.051 "process_max_bandwidth_mb_sec": 0 00:24:16.051 } 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "method": "bdev_iscsi_set_options", 00:24:16.051 "params": { 00:24:16.051 "timeout_sec": 30 00:24:16.051 } 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "method": "bdev_nvme_set_options", 00:24:16.051 "params": { 00:24:16.051 "action_on_timeout": "none", 00:24:16.051 "timeout_us": 0, 00:24:16.051 "timeout_admin_us": 0, 00:24:16.051 "keep_alive_timeout_ms": 10000, 00:24:16.051 "arbitration_burst": 0, 00:24:16.051 "low_priority_weight": 0, 00:24:16.051 "medium_priority_weight": 0, 00:24:16.051 "high_priority_weight": 0, 00:24:16.051 "nvme_adminq_poll_period_us": 10000, 00:24:16.051 "nvme_ioq_poll_period_us": 0, 00:24:16.051 "io_queue_requests": 0, 00:24:16.051 "delay_cmd_submit": true, 00:24:16.051 "transport_retry_count": 4, 00:24:16.051 "bdev_retry_count": 3, 00:24:16.051 "transport_ack_timeout": 0, 00:24:16.051 "ctrlr_loss_timeout_sec": 0, 00:24:16.051 "reconnect_delay_sec": 0, 00:24:16.051 "fast_io_fail_timeout_sec": 0, 00:24:16.051 "disable_auto_failback": false, 00:24:16.051 "generate_uuids": false, 00:24:16.051 "transport_tos": 0, 00:24:16.051 "nvme_error_stat": false, 00:24:16.051 "rdma_srq_size": 0, 00:24:16.051 "io_path_stat": false, 00:24:16.051 "allow_accel_sequence": false, 00:24:16.051 "rdma_max_cq_size": 0, 00:24:16.051 "rdma_cm_event_timeout_ms": 0, 00:24:16.051 "dhchap_digests": [ 00:24:16.051 "sha256", 00:24:16.051 "sha384", 00:24:16.051 "sha512" 00:24:16.051 ], 00:24:16.051 "dhchap_dhgroups": [ 00:24:16.051 "null", 00:24:16.051 "ffdhe2048", 00:24:16.051 "ffdhe3072", 00:24:16.051 
"ffdhe4096", 00:24:16.051 "ffdhe6144", 00:24:16.051 "ffdhe8192" 00:24:16.051 ] 00:24:16.051 } 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "method": "bdev_nvme_set_hotplug", 00:24:16.051 "params": { 00:24:16.051 "period_us": 100000, 00:24:16.051 "enable": false 00:24:16.051 } 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "method": "bdev_malloc_create", 00:24:16.051 "params": { 00:24:16.051 "name": "malloc0", 00:24:16.051 "num_blocks": 8192, 00:24:16.051 "block_size": 4096, 00:24:16.051 "physical_block_size": 4096, 00:24:16.051 "uuid": "c294827c-bea9-4de4-a358-63009baa73f6", 00:24:16.051 "optimal_io_boundary": 0, 00:24:16.051 "md_size": 0, 00:24:16.051 "dif_type": 0, 00:24:16.051 "dif_is_head_of_md": false, 00:24:16.051 "dif_pi_format": 0 00:24:16.051 } 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "method": "bdev_wait_for_examine" 00:24:16.051 } 00:24:16.051 ] 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "subsystem": "nbd", 00:24:16.051 "config": [] 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "subsystem": "scheduler", 00:24:16.051 "config": [ 00:24:16.051 { 00:24:16.051 "method": "framework_set_scheduler", 00:24:16.051 "params": { 00:24:16.051 "name": "static" 00:24:16.051 } 00:24:16.051 } 00:24:16.051 ] 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "subsystem": "nvmf", 00:24:16.051 "config": [ 00:24:16.051 { 00:24:16.051 "method": "nvmf_set_config", 00:24:16.051 "params": { 00:24:16.051 "discovery_filter": "match_any", 00:24:16.051 "admin_cmd_passthru": { 00:24:16.051 "identify_ctrlr": false 00:24:16.051 }, 00:24:16.051 "dhchap_digests": [ 00:24:16.051 "sha256", 00:24:16.051 "sha384", 00:24:16.051 "sha512" 00:24:16.051 ], 00:24:16.051 "dhchap_dhgroups": [ 00:24:16.051 "null", 00:24:16.051 "ffdhe2048", 00:24:16.051 "ffdhe3072", 00:24:16.051 "ffdhe4096", 00:24:16.051 "ffdhe6144", 00:24:16.051 "ffdhe8192" 00:24:16.051 ] 00:24:16.051 } 00:24:16.051 }, 00:24:16.051 { 00:24:16.051 "method": "nvmf_set_max_subsystems", 00:24:16.051 "params": { 00:24:16.051 "max_subsystems": 1024 
00:24:16.051 } 00:24:16.052 }, 00:24:16.052 { 00:24:16.052 "method": "nvmf_set_crdt", 00:24:16.052 "params": { 00:24:16.052 "crdt1": 0, 00:24:16.052 "crdt2": 0, 00:24:16.052 "crdt3": 0 00:24:16.052 } 00:24:16.052 }, 00:24:16.052 { 00:24:16.052 "method": "nvmf_create_transport", 00:24:16.052 "params": { 00:24:16.052 "trtype": "TCP", 00:24:16.052 "max_queue_depth": 128, 00:24:16.052 "max_io_qpairs_per_ctrlr": 127, 00:24:16.052 "in_capsule_data_size": 4096, 00:24:16.052 "max_io_size": 131072, 00:24:16.052 "io_unit_size": 131072, 00:24:16.052 "max_aq_depth": 128, 00:24:16.052 "num_shared_buffers": 511, 00:24:16.052 "buf_cache_size": 4294967295, 00:24:16.052 "dif_insert_or_strip": false, 00:24:16.052 "zcopy": false, 00:24:16.052 "c2h_success": false, 00:24:16.052 "sock_priority": 0, 00:24:16.052 "abort_timeout_sec": 1, 00:24:16.052 "ack_timeout": 0, 00:24:16.052 "data_wr_pool_size": 0 00:24:16.052 } 00:24:16.052 }, 00:24:16.052 { 00:24:16.052 "method": "nvmf_create_subsystem", 00:24:16.052 "params": { 00:24:16.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.052 "allow_any_host": false, 00:24:16.052 "serial_number": "SPDK00000000000001", 00:24:16.052 "model_number": "SPDK bdev Controller", 00:24:16.052 "max_namespaces": 10, 00:24:16.052 "min_cntlid": 1, 00:24:16.052 "max_cntlid": 65519, 00:24:16.052 "ana_reporting": false 00:24:16.052 } 00:24:16.052 }, 00:24:16.052 { 00:24:16.052 "method": "nvmf_subsystem_add_host", 00:24:16.052 "params": { 00:24:16.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.052 "host": "nqn.2016-06.io.spdk:host1", 00:24:16.052 "psk": "key0" 00:24:16.052 } 00:24:16.052 }, 00:24:16.052 { 00:24:16.052 "method": "nvmf_subsystem_add_ns", 00:24:16.052 "params": { 00:24:16.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.052 "namespace": { 00:24:16.052 "nsid": 1, 00:24:16.052 "bdev_name": "malloc0", 00:24:16.052 "nguid": "C294827CBEA94DE4A35863009BAA73F6", 00:24:16.052 "uuid": "c294827c-bea9-4de4-a358-63009baa73f6", 00:24:16.052 "no_auto_visible": 
false 00:24:16.052 } 00:24:16.052 } 00:24:16.052 }, 00:24:16.052 { 00:24:16.052 "method": "nvmf_subsystem_add_listener", 00:24:16.052 "params": { 00:24:16.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.052 "listen_address": { 00:24:16.052 "trtype": "TCP", 00:24:16.052 "adrfam": "IPv4", 00:24:16.052 "traddr": "10.0.0.2", 00:24:16.052 "trsvcid": "4420" 00:24:16.052 }, 00:24:16.052 "secure_channel": true 00:24:16.052 } 00:24:16.052 } 00:24:16.052 ] 00:24:16.052 } 00:24:16.052 ] 00:24:16.052 }' 00:24:16.052 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.052 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3903002 00:24:16.052 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:16.052 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3903002 00:24:16.052 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3903002 ']' 00:24:16.052 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.052 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:16.052 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:16.052 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:16.052 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.052 [2024-11-06 15:28:43.507983] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:16.052 [2024-11-06 15:28:43.508093] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.052 [2024-11-06 15:28:43.635406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.312 [2024-11-06 15:28:43.737420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.312 [2024-11-06 15:28:43.737465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.312 [2024-11-06 15:28:43.737475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.312 [2024-11-06 15:28:43.737485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.312 [2024-11-06 15:28:43.737493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:16.312 [2024-11-06 15:28:43.738897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.879 [2024-11-06 15:28:44.236813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.879 [2024-11-06 15:28:44.268828] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.879 [2024-11-06 15:28:44.269064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.879 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:16.879 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:16.879 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3903042 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3903042 /var/tmp/bdevperf.sock 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3903042 ']' 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:16.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:16.880 "subsystems": [ 00:24:16.880 { 00:24:16.880 "subsystem": "keyring", 00:24:16.880 "config": [ 00:24:16.880 { 00:24:16.880 "method": "keyring_file_add_key", 00:24:16.880 "params": { 00:24:16.880 "name": "key0", 00:24:16.880 "path": "/tmp/tmp.3NDmuKDv3S" 00:24:16.880 } 00:24:16.880 } 00:24:16.880 ] 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "subsystem": "iobuf", 00:24:16.880 "config": [ 00:24:16.880 { 00:24:16.880 "method": "iobuf_set_options", 00:24:16.880 "params": { 00:24:16.880 "small_pool_count": 8192, 00:24:16.880 "large_pool_count": 1024, 00:24:16.880 "small_bufsize": 8192, 00:24:16.880 "large_bufsize": 135168, 00:24:16.880 "enable_numa": false 00:24:16.880 } 00:24:16.880 } 00:24:16.880 ] 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "subsystem": "sock", 00:24:16.880 "config": [ 00:24:16.880 { 00:24:16.880 "method": "sock_set_default_impl", 00:24:16.880 "params": { 00:24:16.880 "impl_name": "posix" 00:24:16.880 } 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "method": "sock_impl_set_options", 00:24:16.880 "params": { 00:24:16.880 "impl_name": "ssl", 00:24:16.880 "recv_buf_size": 4096, 00:24:16.880 "send_buf_size": 4096, 00:24:16.880 "enable_recv_pipe": true, 00:24:16.880 "enable_quickack": false, 00:24:16.880 "enable_placement_id": 0, 00:24:16.880 "enable_zerocopy_send_server": true, 00:24:16.880 "enable_zerocopy_send_client": false, 00:24:16.880 "zerocopy_threshold": 0, 00:24:16.880 "tls_version": 0, 00:24:16.880 "enable_ktls": false 00:24:16.880 } 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "method": "sock_impl_set_options", 00:24:16.880 "params": { 
00:24:16.880 "impl_name": "posix", 00:24:16.880 "recv_buf_size": 2097152, 00:24:16.880 "send_buf_size": 2097152, 00:24:16.880 "enable_recv_pipe": true, 00:24:16.880 "enable_quickack": false, 00:24:16.880 "enable_placement_id": 0, 00:24:16.880 "enable_zerocopy_send_server": true, 00:24:16.880 "enable_zerocopy_send_client": false, 00:24:16.880 "zerocopy_threshold": 0, 00:24:16.880 "tls_version": 0, 00:24:16.880 "enable_ktls": false 00:24:16.880 } 00:24:16.880 } 00:24:16.880 ] 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "subsystem": "vmd", 00:24:16.880 "config": [] 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "subsystem": "accel", 00:24:16.880 "config": [ 00:24:16.880 { 00:24:16.880 "method": "accel_set_options", 00:24:16.880 "params": { 00:24:16.880 "small_cache_size": 128, 00:24:16.880 "large_cache_size": 16, 00:24:16.880 "task_count": 2048, 00:24:16.880 "sequence_count": 2048, 00:24:16.880 "buf_count": 2048 00:24:16.880 } 00:24:16.880 } 00:24:16.880 ] 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "subsystem": "bdev", 00:24:16.880 "config": [ 00:24:16.880 { 00:24:16.880 "method": "bdev_set_options", 00:24:16.880 "params": { 00:24:16.880 "bdev_io_pool_size": 65535, 00:24:16.880 "bdev_io_cache_size": 256, 00:24:16.880 "bdev_auto_examine": true, 00:24:16.880 "iobuf_small_cache_size": 128, 00:24:16.880 "iobuf_large_cache_size": 16 00:24:16.880 } 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "method": "bdev_raid_set_options", 00:24:16.880 "params": { 00:24:16.880 "process_window_size_kb": 1024, 00:24:16.880 "process_max_bandwidth_mb_sec": 0 00:24:16.880 } 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "method": "bdev_iscsi_set_options", 00:24:16.880 "params": { 00:24:16.880 "timeout_sec": 30 00:24:16.880 } 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "method": "bdev_nvme_set_options", 00:24:16.880 "params": { 00:24:16.880 "action_on_timeout": "none", 00:24:16.880 "timeout_us": 0, 00:24:16.880 "timeout_admin_us": 0, 00:24:16.880 "keep_alive_timeout_ms": 10000, 00:24:16.880 
"arbitration_burst": 0, 00:24:16.880 "low_priority_weight": 0, 00:24:16.880 "medium_priority_weight": 0, 00:24:16.880 "high_priority_weight": 0, 00:24:16.880 "nvme_adminq_poll_period_us": 10000, 00:24:16.880 "nvme_ioq_poll_period_us": 0, 00:24:16.880 "io_queue_requests": 512, 00:24:16.880 "delay_cmd_submit": true, 00:24:16.880 "transport_retry_count": 4, 00:24:16.880 "bdev_retry_count": 3, 00:24:16.880 "transport_ack_timeout": 0, 00:24:16.880 "ctrlr_loss_timeout_sec": 0, 00:24:16.880 "reconnect_delay_sec": 0, 00:24:16.880 "fast_io_fail_timeout_sec": 0, 00:24:16.880 "disable_auto_failback": false, 00:24:16.880 "generate_uuids": false, 00:24:16.880 "transport_tos": 0, 00:24:16.880 "nvme_error_stat": false, 00:24:16.880 "rdma_srq_size": 0, 00:24:16.880 "io_path_stat": false, 00:24:16.880 "allow_accel_sequence": false, 00:24:16.880 "rdma_max_cq_size": 0, 00:24:16.880 "rdma_cm_event_timeout_ms": 0, 00:24:16.880 "dhchap_digests": [ 00:24:16.880 "sha256", 00:24:16.880 "sha384", 00:24:16.880 "sha512" 00:24:16.880 ], 00:24:16.880 "dhchap_dhgroups": [ 00:24:16.880 "null", 00:24:16.880 "ffdhe2048", 00:24:16.880 "ffdhe3072", 00:24:16.880 "ffdhe4096", 00:24:16.880 "ffdhe6144", 00:24:16.880 "ffdhe8192" 00:24:16.880 ] 00:24:16.880 } 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "method": "bdev_nvme_attach_controller", 00:24:16.880 "params": { 00:24:16.880 "name": "TLSTEST", 00:24:16.880 "trtype": "TCP", 00:24:16.880 "adrfam": "IPv4", 00:24:16.880 "traddr": "10.0.0.2", 00:24:16.880 "trsvcid": "4420", 00:24:16.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.880 "prchk_reftag": false, 00:24:16.880 "prchk_guard": false, 00:24:16.880 "ctrlr_loss_timeout_sec": 0, 00:24:16.880 "reconnect_delay_sec": 0, 00:24:16.880 "fast_io_fail_timeout_sec": 0, 00:24:16.880 "psk": "key0", 00:24:16.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:16.880 "hdgst": false, 00:24:16.880 "ddgst": false, 00:24:16.880 "multipath": "multipath" 00:24:16.880 } 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 
"method": "bdev_nvme_set_hotplug", 00:24:16.880 "params": { 00:24:16.880 "period_us": 100000, 00:24:16.880 "enable": false 00:24:16.880 } 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "method": "bdev_wait_for_examine" 00:24:16.880 } 00:24:16.880 ] 00:24:16.880 }, 00:24:16.880 { 00:24:16.880 "subsystem": "nbd", 00:24:16.880 "config": [] 00:24:16.880 } 00:24:16.880 ] 00:24:16.880 }' 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:16.880 15:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.880 [2024-11-06 15:28:44.420593] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:16.880 [2024-11-06 15:28:44.420699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903042 ] 00:24:17.139 [2024-11-06 15:28:44.549919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.139 [2024-11-06 15:28:44.662403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.707 [2024-11-06 15:28:45.057883] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:17.707 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:17.707 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:17.707 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:17.707 Running I/O for 10 seconds... 
00:24:20.019 4460.00 IOPS, 17.42 MiB/s [2024-11-06T14:28:48.594Z] 4534.50 IOPS, 17.71 MiB/s [2024-11-06T14:28:49.531Z] 4586.00 IOPS, 17.91 MiB/s [2024-11-06T14:28:50.466Z] 4625.00 IOPS, 18.07 MiB/s [2024-11-06T14:28:51.403Z] 4606.80 IOPS, 18.00 MiB/s [2024-11-06T14:28:52.779Z] 4620.00 IOPS, 18.05 MiB/s [2024-11-06T14:28:53.715Z] 4585.71 IOPS, 17.91 MiB/s [2024-11-06T14:28:54.650Z] 4608.00 IOPS, 18.00 MiB/s [2024-11-06T14:28:55.589Z] 4611.89 IOPS, 18.02 MiB/s [2024-11-06T14:28:55.589Z] 4625.90 IOPS, 18.07 MiB/s 00:24:27.951 Latency(us) 00:24:27.951 [2024-11-06T14:28:55.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.951 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:27.951 Verification LBA range: start 0x0 length 0x2000 00:24:27.951 TLSTESTn1 : 10.02 4631.30 18.09 0.00 0.00 27596.28 5742.20 41943.04 00:24:27.951 [2024-11-06T14:28:55.589Z] =================================================================================================================== 00:24:27.951 [2024-11-06T14:28:55.589Z] Total : 4631.30 18.09 0.00 0.00 27596.28 5742.20 41943.04 00:24:27.951 { 00:24:27.951 "results": [ 00:24:27.951 { 00:24:27.951 "job": "TLSTESTn1", 00:24:27.951 "core_mask": "0x4", 00:24:27.951 "workload": "verify", 00:24:27.951 "status": "finished", 00:24:27.951 "verify_range": { 00:24:27.951 "start": 0, 00:24:27.951 "length": 8192 00:24:27.951 }, 00:24:27.951 "queue_depth": 128, 00:24:27.951 "io_size": 4096, 00:24:27.951 "runtime": 10.015112, 00:24:27.951 "iops": 4631.30117766032, 00:24:27.951 "mibps": 18.091020225235624, 00:24:27.951 "io_failed": 0, 00:24:27.951 "io_timeout": 0, 00:24:27.951 "avg_latency_us": 27596.27590124871, 00:24:27.951 "min_latency_us": 5742.201904761905, 00:24:27.951 "max_latency_us": 41943.04 00:24:27.951 } 00:24:27.951 ], 00:24:27.951 "core_count": 1 00:24:27.951 } 00:24:27.951 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:24:27.951 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3903042 00:24:27.951 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3903042 ']' 00:24:27.951 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3903042 00:24:27.951 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:27.951 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:27.951 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3903042 00:24:27.951 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:27.951 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:27.951 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3903042' 00:24:27.951 killing process with pid 3903042 00:24:27.951 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3903042 00:24:27.951 Received shutdown signal, test time was about 10.000000 seconds 00:24:27.951 00:24:27.951 Latency(us) 00:24:27.951 [2024-11-06T14:28:55.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.951 [2024-11-06T14:28:55.589Z] =================================================================================================================== 00:24:27.951 [2024-11-06T14:28:55.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.951 15:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3903042 00:24:28.889 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3903002 00:24:28.889 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 
-- # '[' -z 3903002 ']' 00:24:28.889 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3903002 00:24:28.889 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:28.889 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:28.889 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3903002 00:24:28.889 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:28.889 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:28.889 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3903002' 00:24:28.889 killing process with pid 3903002 00:24:28.889 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3903002 00:24:28.889 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3903002 00:24:30.268 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:30.268 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:30.268 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:30.268 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.268 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3905301 00:24:30.268 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:30.268 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3905301 00:24:30.268 15:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3905301 ']' 00:24:30.268 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.268 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:30.268 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.268 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:30.268 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.268 [2024-11-06 15:28:57.687080] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:30.268 [2024-11-06 15:28:57.687192] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.268 [2024-11-06 15:28:57.813135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.527 [2024-11-06 15:28:57.914246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.527 [2024-11-06 15:28:57.914296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.527 [2024-11-06 15:28:57.914306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.527 [2024-11-06 15:28:57.914318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:30.527 [2024-11-06 15:28:57.914326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.527 [2024-11-06 15:28:57.915863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.095 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:31.095 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:31.095 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.095 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:31.095 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.095 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.095 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.3NDmuKDv3S 00:24:31.095 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3NDmuKDv3S 00:24:31.095 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:31.095 [2024-11-06 15:28:58.689468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.095 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:31.354 15:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:31.613 [2024-11-06 15:28:59.074519] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:24:31.613 [2024-11-06 15:28:59.074769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.613 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:31.872 malloc0 00:24:31.872 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:31.872 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3NDmuKDv3S 00:24:32.139 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:32.401 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:32.401 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3905587 00:24:32.401 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:32.401 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3905587 /var/tmp/bdevperf.sock 00:24:32.401 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3905587 ']' 00:24:32.401 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.401 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:32.401 
15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.401 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:32.401 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.401 [2024-11-06 15:28:59.921153] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:32.401 [2024-11-06 15:28:59.921260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905587 ] 00:24:32.660 [2024-11-06 15:29:00.050035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.660 [2024-11-06 15:29:00.167672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.228 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:33.228 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:33.228 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3NDmuKDv3S 00:24:33.487 15:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:33.487 [2024-11-06 15:29:01.076936] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:24:33.745 nvme0n1 00:24:33.745 15:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:33.745 Running I/O for 1 seconds... 00:24:34.683 4544.00 IOPS, 17.75 MiB/s 00:24:34.683 Latency(us) 00:24:34.683 [2024-11-06T14:29:02.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.683 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:34.683 Verification LBA range: start 0x0 length 0x2000 00:24:34.683 nvme0n1 : 1.02 4575.06 17.87 0.00 0.00 27720.28 7583.45 26963.38 00:24:34.683 [2024-11-06T14:29:02.321Z] =================================================================================================================== 00:24:34.683 [2024-11-06T14:29:02.321Z] Total : 4575.06 17.87 0.00 0.00 27720.28 7583.45 26963.38 00:24:34.683 { 00:24:34.683 "results": [ 00:24:34.683 { 00:24:34.683 "job": "nvme0n1", 00:24:34.683 "core_mask": "0x2", 00:24:34.683 "workload": "verify", 00:24:34.683 "status": "finished", 00:24:34.683 "verify_range": { 00:24:34.683 "start": 0, 00:24:34.683 "length": 8192 00:24:34.683 }, 00:24:34.683 "queue_depth": 128, 00:24:34.683 "io_size": 4096, 00:24:34.683 "runtime": 1.021188, 00:24:34.683 "iops": 4575.063553429926, 00:24:34.683 "mibps": 17.87134200558565, 00:24:34.683 "io_failed": 0, 00:24:34.683 "io_timeout": 0, 00:24:34.683 "avg_latency_us": 27720.281174168296, 00:24:34.683 "min_latency_us": 7583.451428571429, 00:24:34.683 "max_latency_us": 26963.382857142857 00:24:34.683 } 00:24:34.683 ], 00:24:34.683 "core_count": 1 00:24:34.683 } 00:24:34.683 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3905587 00:24:34.683 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3905587 ']' 00:24:34.683 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 3905587 00:24:34.683 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:34.683 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:34.683 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3905587 00:24:34.942 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:34.942 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:34.942 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3905587' 00:24:34.942 killing process with pid 3905587 00:24:34.942 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3905587 00:24:34.942 Received shutdown signal, test time was about 1.000000 seconds 00:24:34.942 00:24:34.942 Latency(us) 00:24:34.942 [2024-11-06T14:29:02.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.942 [2024-11-06T14:29:02.580Z] =================================================================================================================== 00:24:34.942 [2024-11-06T14:29:02.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.942 15:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3905587 00:24:35.879 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3905301 00:24:35.879 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3905301 ']' 00:24:35.879 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3905301 00:24:35.879 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:35.879 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:35.879 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3905301 00:24:35.879 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:35.879 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:35.879 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3905301' 00:24:35.879 killing process with pid 3905301 00:24:35.879 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3905301 00:24:35.879 15:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3905301 00:24:36.816 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:36.816 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:36.816 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:36.816 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.074 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3906337 00:24:37.074 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:37.074 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3906337 00:24:37.074 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3906337 ']' 00:24:37.074 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.074 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:24:37.074 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.074 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:37.074 15:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.074 [2024-11-06 15:29:04.540172] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:37.074 [2024-11-06 15:29:04.540283] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.074 [2024-11-06 15:29:04.674159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.333 [2024-11-06 15:29:04.779825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.333 [2024-11-06 15:29:04.779870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.333 [2024-11-06 15:29:04.779880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.333 [2024-11-06 15:29:04.779891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.333 [2024-11-06 15:29:04.779899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:37.333 [2024-11-06 15:29:04.781372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.939 [2024-11-06 15:29:05.392211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.939 malloc0 00:24:37.939 [2024-11-06 15:29:05.447281] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:37.939 [2024-11-06 15:29:05.447538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3906537 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 3906537 /var/tmp/bdevperf.sock 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3906537 ']' 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:37.939 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.939 [2024-11-06 15:29:05.550131] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:24:37.939 [2024-11-06 15:29:05.550215] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3906537 ] 00:24:38.258 [2024-11-06 15:29:05.675354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.258 [2024-11-06 15:29:05.783244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.825 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:38.825 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:38.825 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3NDmuKDv3S 00:24:39.084 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:39.084 [2024-11-06 15:29:06.699362] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.343 nvme0n1 00:24:39.343 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:39.343 Running I/O for 1 seconds... 
00:24:40.279 4481.00 IOPS, 17.50 MiB/s 00:24:40.279 Latency(us) 00:24:40.279 [2024-11-06T14:29:07.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.279 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:40.279 Verification LBA range: start 0x0 length 0x2000 00:24:40.279 nvme0n1 : 1.01 4543.98 17.75 0.00 0.00 27959.98 5274.09 30458.64 00:24:40.279 [2024-11-06T14:29:07.917Z] =================================================================================================================== 00:24:40.279 [2024-11-06T14:29:07.917Z] Total : 4543.98 17.75 0.00 0.00 27959.98 5274.09 30458.64 00:24:40.279 { 00:24:40.279 "results": [ 00:24:40.279 { 00:24:40.279 "job": "nvme0n1", 00:24:40.279 "core_mask": "0x2", 00:24:40.279 "workload": "verify", 00:24:40.279 "status": "finished", 00:24:40.279 "verify_range": { 00:24:40.279 "start": 0, 00:24:40.279 "length": 8192 00:24:40.279 }, 00:24:40.279 "queue_depth": 128, 00:24:40.279 "io_size": 4096, 00:24:40.279 "runtime": 1.01453, 00:24:40.279 "iops": 4543.976028308675, 00:24:40.279 "mibps": 17.749906360580763, 00:24:40.279 "io_failed": 0, 00:24:40.279 "io_timeout": 0, 00:24:40.279 "avg_latency_us": 27959.98226298936, 00:24:40.279 "min_latency_us": 5274.087619047619, 00:24:40.279 "max_latency_us": 30458.63619047619 00:24:40.279 } 00:24:40.279 ], 00:24:40.279 "core_count": 1 00:24:40.279 } 00:24:40.538 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:40.538 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.538 15:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.538 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.538 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:40.538 "subsystems": [ 00:24:40.538 { 00:24:40.538 "subsystem": 
"keyring", 00:24:40.538 "config": [ 00:24:40.538 { 00:24:40.538 "method": "keyring_file_add_key", 00:24:40.538 "params": { 00:24:40.538 "name": "key0", 00:24:40.538 "path": "/tmp/tmp.3NDmuKDv3S" 00:24:40.538 } 00:24:40.538 } 00:24:40.538 ] 00:24:40.538 }, 00:24:40.538 { 00:24:40.538 "subsystem": "iobuf", 00:24:40.538 "config": [ 00:24:40.538 { 00:24:40.538 "method": "iobuf_set_options", 00:24:40.538 "params": { 00:24:40.538 "small_pool_count": 8192, 00:24:40.538 "large_pool_count": 1024, 00:24:40.538 "small_bufsize": 8192, 00:24:40.538 "large_bufsize": 135168, 00:24:40.538 "enable_numa": false 00:24:40.538 } 00:24:40.538 } 00:24:40.538 ] 00:24:40.538 }, 00:24:40.538 { 00:24:40.538 "subsystem": "sock", 00:24:40.538 "config": [ 00:24:40.538 { 00:24:40.538 "method": "sock_set_default_impl", 00:24:40.538 "params": { 00:24:40.538 "impl_name": "posix" 00:24:40.538 } 00:24:40.538 }, 00:24:40.538 { 00:24:40.538 "method": "sock_impl_set_options", 00:24:40.538 "params": { 00:24:40.538 "impl_name": "ssl", 00:24:40.538 "recv_buf_size": 4096, 00:24:40.538 "send_buf_size": 4096, 00:24:40.538 "enable_recv_pipe": true, 00:24:40.538 "enable_quickack": false, 00:24:40.538 "enable_placement_id": 0, 00:24:40.538 "enable_zerocopy_send_server": true, 00:24:40.538 "enable_zerocopy_send_client": false, 00:24:40.538 "zerocopy_threshold": 0, 00:24:40.538 "tls_version": 0, 00:24:40.538 "enable_ktls": false 00:24:40.538 } 00:24:40.538 }, 00:24:40.538 { 00:24:40.538 "method": "sock_impl_set_options", 00:24:40.538 "params": { 00:24:40.538 "impl_name": "posix", 00:24:40.538 "recv_buf_size": 2097152, 00:24:40.538 "send_buf_size": 2097152, 00:24:40.539 "enable_recv_pipe": true, 00:24:40.539 "enable_quickack": false, 00:24:40.539 "enable_placement_id": 0, 00:24:40.539 "enable_zerocopy_send_server": true, 00:24:40.539 "enable_zerocopy_send_client": false, 00:24:40.539 "zerocopy_threshold": 0, 00:24:40.539 "tls_version": 0, 00:24:40.539 "enable_ktls": false 00:24:40.539 } 00:24:40.539 } 00:24:40.539 
] 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "subsystem": "vmd", 00:24:40.539 "config": [] 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "subsystem": "accel", 00:24:40.539 "config": [ 00:24:40.539 { 00:24:40.539 "method": "accel_set_options", 00:24:40.539 "params": { 00:24:40.539 "small_cache_size": 128, 00:24:40.539 "large_cache_size": 16, 00:24:40.539 "task_count": 2048, 00:24:40.539 "sequence_count": 2048, 00:24:40.539 "buf_count": 2048 00:24:40.539 } 00:24:40.539 } 00:24:40.539 ] 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "subsystem": "bdev", 00:24:40.539 "config": [ 00:24:40.539 { 00:24:40.539 "method": "bdev_set_options", 00:24:40.539 "params": { 00:24:40.539 "bdev_io_pool_size": 65535, 00:24:40.539 "bdev_io_cache_size": 256, 00:24:40.539 "bdev_auto_examine": true, 00:24:40.539 "iobuf_small_cache_size": 128, 00:24:40.539 "iobuf_large_cache_size": 16 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "bdev_raid_set_options", 00:24:40.539 "params": { 00:24:40.539 "process_window_size_kb": 1024, 00:24:40.539 "process_max_bandwidth_mb_sec": 0 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "bdev_iscsi_set_options", 00:24:40.539 "params": { 00:24:40.539 "timeout_sec": 30 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "bdev_nvme_set_options", 00:24:40.539 "params": { 00:24:40.539 "action_on_timeout": "none", 00:24:40.539 "timeout_us": 0, 00:24:40.539 "timeout_admin_us": 0, 00:24:40.539 "keep_alive_timeout_ms": 10000, 00:24:40.539 "arbitration_burst": 0, 00:24:40.539 "low_priority_weight": 0, 00:24:40.539 "medium_priority_weight": 0, 00:24:40.539 "high_priority_weight": 0, 00:24:40.539 "nvme_adminq_poll_period_us": 10000, 00:24:40.539 "nvme_ioq_poll_period_us": 0, 00:24:40.539 "io_queue_requests": 0, 00:24:40.539 "delay_cmd_submit": true, 00:24:40.539 "transport_retry_count": 4, 00:24:40.539 "bdev_retry_count": 3, 00:24:40.539 "transport_ack_timeout": 0, 00:24:40.539 "ctrlr_loss_timeout_sec": 0, 
00:24:40.539 "reconnect_delay_sec": 0, 00:24:40.539 "fast_io_fail_timeout_sec": 0, 00:24:40.539 "disable_auto_failback": false, 00:24:40.539 "generate_uuids": false, 00:24:40.539 "transport_tos": 0, 00:24:40.539 "nvme_error_stat": false, 00:24:40.539 "rdma_srq_size": 0, 00:24:40.539 "io_path_stat": false, 00:24:40.539 "allow_accel_sequence": false, 00:24:40.539 "rdma_max_cq_size": 0, 00:24:40.539 "rdma_cm_event_timeout_ms": 0, 00:24:40.539 "dhchap_digests": [ 00:24:40.539 "sha256", 00:24:40.539 "sha384", 00:24:40.539 "sha512" 00:24:40.539 ], 00:24:40.539 "dhchap_dhgroups": [ 00:24:40.539 "null", 00:24:40.539 "ffdhe2048", 00:24:40.539 "ffdhe3072", 00:24:40.539 "ffdhe4096", 00:24:40.539 "ffdhe6144", 00:24:40.539 "ffdhe8192" 00:24:40.539 ] 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "bdev_nvme_set_hotplug", 00:24:40.539 "params": { 00:24:40.539 "period_us": 100000, 00:24:40.539 "enable": false 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "bdev_malloc_create", 00:24:40.539 "params": { 00:24:40.539 "name": "malloc0", 00:24:40.539 "num_blocks": 8192, 00:24:40.539 "block_size": 4096, 00:24:40.539 "physical_block_size": 4096, 00:24:40.539 "uuid": "0da466d8-3269-4835-8770-96c23fb55014", 00:24:40.539 "optimal_io_boundary": 0, 00:24:40.539 "md_size": 0, 00:24:40.539 "dif_type": 0, 00:24:40.539 "dif_is_head_of_md": false, 00:24:40.539 "dif_pi_format": 0 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "bdev_wait_for_examine" 00:24:40.539 } 00:24:40.539 ] 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "subsystem": "nbd", 00:24:40.539 "config": [] 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "subsystem": "scheduler", 00:24:40.539 "config": [ 00:24:40.539 { 00:24:40.539 "method": "framework_set_scheduler", 00:24:40.539 "params": { 00:24:40.539 "name": "static" 00:24:40.539 } 00:24:40.539 } 00:24:40.539 ] 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "subsystem": "nvmf", 00:24:40.539 "config": [ 00:24:40.539 { 
00:24:40.539 "method": "nvmf_set_config", 00:24:40.539 "params": { 00:24:40.539 "discovery_filter": "match_any", 00:24:40.539 "admin_cmd_passthru": { 00:24:40.539 "identify_ctrlr": false 00:24:40.539 }, 00:24:40.539 "dhchap_digests": [ 00:24:40.539 "sha256", 00:24:40.539 "sha384", 00:24:40.539 "sha512" 00:24:40.539 ], 00:24:40.539 "dhchap_dhgroups": [ 00:24:40.539 "null", 00:24:40.539 "ffdhe2048", 00:24:40.539 "ffdhe3072", 00:24:40.539 "ffdhe4096", 00:24:40.539 "ffdhe6144", 00:24:40.539 "ffdhe8192" 00:24:40.539 ] 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "nvmf_set_max_subsystems", 00:24:40.539 "params": { 00:24:40.539 "max_subsystems": 1024 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "nvmf_set_crdt", 00:24:40.539 "params": { 00:24:40.539 "crdt1": 0, 00:24:40.539 "crdt2": 0, 00:24:40.539 "crdt3": 0 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "nvmf_create_transport", 00:24:40.539 "params": { 00:24:40.539 "trtype": "TCP", 00:24:40.539 "max_queue_depth": 128, 00:24:40.539 "max_io_qpairs_per_ctrlr": 127, 00:24:40.539 "in_capsule_data_size": 4096, 00:24:40.539 "max_io_size": 131072, 00:24:40.539 "io_unit_size": 131072, 00:24:40.539 "max_aq_depth": 128, 00:24:40.539 "num_shared_buffers": 511, 00:24:40.539 "buf_cache_size": 4294967295, 00:24:40.539 "dif_insert_or_strip": false, 00:24:40.539 "zcopy": false, 00:24:40.539 "c2h_success": false, 00:24:40.539 "sock_priority": 0, 00:24:40.539 "abort_timeout_sec": 1, 00:24:40.539 "ack_timeout": 0, 00:24:40.539 "data_wr_pool_size": 0 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "nvmf_create_subsystem", 00:24:40.539 "params": { 00:24:40.539 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.539 "allow_any_host": false, 00:24:40.539 "serial_number": "00000000000000000000", 00:24:40.539 "model_number": "SPDK bdev Controller", 00:24:40.539 "max_namespaces": 32, 00:24:40.539 "min_cntlid": 1, 00:24:40.539 "max_cntlid": 65519, 00:24:40.539 
"ana_reporting": false 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "nvmf_subsystem_add_host", 00:24:40.539 "params": { 00:24:40.539 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.539 "host": "nqn.2016-06.io.spdk:host1", 00:24:40.539 "psk": "key0" 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "nvmf_subsystem_add_ns", 00:24:40.539 "params": { 00:24:40.539 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.539 "namespace": { 00:24:40.539 "nsid": 1, 00:24:40.539 "bdev_name": "malloc0", 00:24:40.539 "nguid": "0DA466D832694835877096C23FB55014", 00:24:40.539 "uuid": "0da466d8-3269-4835-8770-96c23fb55014", 00:24:40.539 "no_auto_visible": false 00:24:40.539 } 00:24:40.539 } 00:24:40.539 }, 00:24:40.539 { 00:24:40.539 "method": "nvmf_subsystem_add_listener", 00:24:40.539 "params": { 00:24:40.539 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.539 "listen_address": { 00:24:40.539 "trtype": "TCP", 00:24:40.539 "adrfam": "IPv4", 00:24:40.539 "traddr": "10.0.0.2", 00:24:40.539 "trsvcid": "4420" 00:24:40.539 }, 00:24:40.539 "secure_channel": false, 00:24:40.539 "sock_impl": "ssl" 00:24:40.539 } 00:24:40.539 } 00:24:40.539 ] 00:24:40.539 } 00:24:40.539 ] 00:24:40.539 }' 00:24:40.539 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:40.798 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:40.798 "subsystems": [ 00:24:40.798 { 00:24:40.798 "subsystem": "keyring", 00:24:40.798 "config": [ 00:24:40.798 { 00:24:40.798 "method": "keyring_file_add_key", 00:24:40.798 "params": { 00:24:40.798 "name": "key0", 00:24:40.798 "path": "/tmp/tmp.3NDmuKDv3S" 00:24:40.798 } 00:24:40.798 } 00:24:40.798 ] 00:24:40.798 }, 00:24:40.798 { 00:24:40.798 "subsystem": "iobuf", 00:24:40.798 "config": [ 00:24:40.798 { 00:24:40.798 "method": "iobuf_set_options", 00:24:40.798 "params": { 00:24:40.798 
"small_pool_count": 8192, 00:24:40.798 "large_pool_count": 1024, 00:24:40.798 "small_bufsize": 8192, 00:24:40.798 "large_bufsize": 135168, 00:24:40.798 "enable_numa": false 00:24:40.798 } 00:24:40.798 } 00:24:40.798 ] 00:24:40.798 }, 00:24:40.798 { 00:24:40.798 "subsystem": "sock", 00:24:40.798 "config": [ 00:24:40.798 { 00:24:40.798 "method": "sock_set_default_impl", 00:24:40.798 "params": { 00:24:40.798 "impl_name": "posix" 00:24:40.798 } 00:24:40.798 }, 00:24:40.798 { 00:24:40.798 "method": "sock_impl_set_options", 00:24:40.798 "params": { 00:24:40.798 "impl_name": "ssl", 00:24:40.798 "recv_buf_size": 4096, 00:24:40.798 "send_buf_size": 4096, 00:24:40.798 "enable_recv_pipe": true, 00:24:40.798 "enable_quickack": false, 00:24:40.798 "enable_placement_id": 0, 00:24:40.798 "enable_zerocopy_send_server": true, 00:24:40.798 "enable_zerocopy_send_client": false, 00:24:40.798 "zerocopy_threshold": 0, 00:24:40.798 "tls_version": 0, 00:24:40.798 "enable_ktls": false 00:24:40.798 } 00:24:40.798 }, 00:24:40.798 { 00:24:40.798 "method": "sock_impl_set_options", 00:24:40.798 "params": { 00:24:40.798 "impl_name": "posix", 00:24:40.798 "recv_buf_size": 2097152, 00:24:40.798 "send_buf_size": 2097152, 00:24:40.798 "enable_recv_pipe": true, 00:24:40.798 "enable_quickack": false, 00:24:40.798 "enable_placement_id": 0, 00:24:40.798 "enable_zerocopy_send_server": true, 00:24:40.799 "enable_zerocopy_send_client": false, 00:24:40.799 "zerocopy_threshold": 0, 00:24:40.799 "tls_version": 0, 00:24:40.799 "enable_ktls": false 00:24:40.799 } 00:24:40.799 } 00:24:40.799 ] 00:24:40.799 }, 00:24:40.799 { 00:24:40.799 "subsystem": "vmd", 00:24:40.799 "config": [] 00:24:40.799 }, 00:24:40.799 { 00:24:40.799 "subsystem": "accel", 00:24:40.799 "config": [ 00:24:40.799 { 00:24:40.799 "method": "accel_set_options", 00:24:40.799 "params": { 00:24:40.799 "small_cache_size": 128, 00:24:40.799 "large_cache_size": 16, 00:24:40.799 "task_count": 2048, 00:24:40.799 "sequence_count": 2048, 00:24:40.799 
"buf_count": 2048 00:24:40.799 } 00:24:40.799 } 00:24:40.799 ] 00:24:40.799 }, 00:24:40.799 { 00:24:40.799 "subsystem": "bdev", 00:24:40.799 "config": [ 00:24:40.799 { 00:24:40.799 "method": "bdev_set_options", 00:24:40.799 "params": { 00:24:40.799 "bdev_io_pool_size": 65535, 00:24:40.799 "bdev_io_cache_size": 256, 00:24:40.799 "bdev_auto_examine": true, 00:24:40.799 "iobuf_small_cache_size": 128, 00:24:40.799 "iobuf_large_cache_size": 16 00:24:40.799 } 00:24:40.799 }, 00:24:40.799 { 00:24:40.799 "method": "bdev_raid_set_options", 00:24:40.799 "params": { 00:24:40.799 "process_window_size_kb": 1024, 00:24:40.799 "process_max_bandwidth_mb_sec": 0 00:24:40.799 } 00:24:40.799 }, 00:24:40.799 { 00:24:40.799 "method": "bdev_iscsi_set_options", 00:24:40.799 "params": { 00:24:40.799 "timeout_sec": 30 00:24:40.799 } 00:24:40.799 }, 00:24:40.799 { 00:24:40.799 "method": "bdev_nvme_set_options", 00:24:40.799 "params": { 00:24:40.799 "action_on_timeout": "none", 00:24:40.799 "timeout_us": 0, 00:24:40.799 "timeout_admin_us": 0, 00:24:40.799 "keep_alive_timeout_ms": 10000, 00:24:40.799 "arbitration_burst": 0, 00:24:40.799 "low_priority_weight": 0, 00:24:40.799 "medium_priority_weight": 0, 00:24:40.799 "high_priority_weight": 0, 00:24:40.799 "nvme_adminq_poll_period_us": 10000, 00:24:40.799 "nvme_ioq_poll_period_us": 0, 00:24:40.799 "io_queue_requests": 512, 00:24:40.799 "delay_cmd_submit": true, 00:24:40.799 "transport_retry_count": 4, 00:24:40.799 "bdev_retry_count": 3, 00:24:40.799 "transport_ack_timeout": 0, 00:24:40.799 "ctrlr_loss_timeout_sec": 0, 00:24:40.799 "reconnect_delay_sec": 0, 00:24:40.799 "fast_io_fail_timeout_sec": 0, 00:24:40.799 "disable_auto_failback": false, 00:24:40.799 "generate_uuids": false, 00:24:40.799 "transport_tos": 0, 00:24:40.799 "nvme_error_stat": false, 00:24:40.799 "rdma_srq_size": 0, 00:24:40.799 "io_path_stat": false, 00:24:40.799 "allow_accel_sequence": false, 00:24:40.799 "rdma_max_cq_size": 0, 00:24:40.799 "rdma_cm_event_timeout_ms": 0, 
00:24:40.799 "dhchap_digests": [ 00:24:40.799 "sha256", 00:24:40.799 "sha384", 00:24:40.799 "sha512" 00:24:40.799 ], 00:24:40.799 "dhchap_dhgroups": [ 00:24:40.799 "null", 00:24:40.799 "ffdhe2048", 00:24:40.799 "ffdhe3072", 00:24:40.799 "ffdhe4096", 00:24:40.799 "ffdhe6144", 00:24:40.799 "ffdhe8192" 00:24:40.799 ] 00:24:40.799 } 00:24:40.799 }, 00:24:40.799 { 00:24:40.799 "method": "bdev_nvme_attach_controller", 00:24:40.799 "params": { 00:24:40.799 "name": "nvme0", 00:24:40.799 "trtype": "TCP", 00:24:40.799 "adrfam": "IPv4", 00:24:40.799 "traddr": "10.0.0.2", 00:24:40.799 "trsvcid": "4420", 00:24:40.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.799 "prchk_reftag": false, 00:24:40.799 "prchk_guard": false, 00:24:40.799 "ctrlr_loss_timeout_sec": 0, 00:24:40.799 "reconnect_delay_sec": 0, 00:24:40.799 "fast_io_fail_timeout_sec": 0, 00:24:40.799 "psk": "key0", 00:24:40.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:40.799 "hdgst": false, 00:24:40.799 "ddgst": false, 00:24:40.799 "multipath": "multipath" 00:24:40.799 } 00:24:40.799 }, 00:24:40.799 { 00:24:40.799 "method": "bdev_nvme_set_hotplug", 00:24:40.799 "params": { 00:24:40.799 "period_us": 100000, 00:24:40.799 "enable": false 00:24:40.799 } 00:24:40.799 }, 00:24:40.799 { 00:24:40.799 "method": "bdev_enable_histogram", 00:24:40.799 "params": { 00:24:40.799 "name": "nvme0n1", 00:24:40.799 "enable": true 00:24:40.799 } 00:24:40.799 }, 00:24:40.799 { 00:24:40.799 "method": "bdev_wait_for_examine" 00:24:40.799 } 00:24:40.799 ] 00:24:40.799 }, 00:24:40.799 { 00:24:40.799 "subsystem": "nbd", 00:24:40.799 "config": [] 00:24:40.799 } 00:24:40.799 ] 00:24:40.799 }' 00:24:40.799 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3906537 00:24:40.799 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3906537 ']' 00:24:40.799 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3906537 00:24:40.799 15:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:40.799 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:40.799 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3906537 00:24:40.799 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:40.799 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:40.799 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3906537' 00:24:40.799 killing process with pid 3906537 00:24:40.799 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3906537 00:24:40.799 Received shutdown signal, test time was about 1.000000 seconds 00:24:40.799 00:24:40.799 Latency(us) 00:24:40.799 [2024-11-06T14:29:08.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.799 [2024-11-06T14:29:08.437Z] =================================================================================================================== 00:24:40.799 [2024-11-06T14:29:08.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.799 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3906537 00:24:41.736 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3906337 00:24:41.736 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3906337 ']' 00:24:41.736 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3906337 00:24:41.736 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:41.736 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:41.736 
15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3906337 00:24:41.736 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:41.736 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:41.736 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3906337' 00:24:41.736 killing process with pid 3906337 00:24:41.736 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3906337 00:24:41.737 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3906337 00:24:43.112 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:43.112 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:43.112 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:43.112 "subsystems": [ 00:24:43.112 { 00:24:43.112 "subsystem": "keyring", 00:24:43.112 "config": [ 00:24:43.112 { 00:24:43.112 "method": "keyring_file_add_key", 00:24:43.112 "params": { 00:24:43.112 "name": "key0", 00:24:43.112 "path": "/tmp/tmp.3NDmuKDv3S" 00:24:43.112 } 00:24:43.112 } 00:24:43.112 ] 00:24:43.112 }, 00:24:43.112 { 00:24:43.112 "subsystem": "iobuf", 00:24:43.112 "config": [ 00:24:43.112 { 00:24:43.112 "method": "iobuf_set_options", 00:24:43.112 "params": { 00:24:43.112 "small_pool_count": 8192, 00:24:43.112 "large_pool_count": 1024, 00:24:43.112 "small_bufsize": 8192, 00:24:43.112 "large_bufsize": 135168, 00:24:43.112 "enable_numa": false 00:24:43.112 } 00:24:43.112 } 00:24:43.112 ] 00:24:43.112 }, 00:24:43.112 { 00:24:43.112 "subsystem": "sock", 00:24:43.112 "config": [ 00:24:43.112 { 00:24:43.112 "method": "sock_set_default_impl", 00:24:43.112 "params": { 00:24:43.112 "impl_name": "posix" 
00:24:43.112 } 00:24:43.112 }, 00:24:43.112 { 00:24:43.112 "method": "sock_impl_set_options", 00:24:43.112 "params": { 00:24:43.112 "impl_name": "ssl", 00:24:43.112 "recv_buf_size": 4096, 00:24:43.112 "send_buf_size": 4096, 00:24:43.112 "enable_recv_pipe": true, 00:24:43.112 "enable_quickack": false, 00:24:43.112 "enable_placement_id": 0, 00:24:43.112 "enable_zerocopy_send_server": true, 00:24:43.112 "enable_zerocopy_send_client": false, 00:24:43.112 "zerocopy_threshold": 0, 00:24:43.112 "tls_version": 0, 00:24:43.112 "enable_ktls": false 00:24:43.112 } 00:24:43.112 }, 00:24:43.112 { 00:24:43.112 "method": "sock_impl_set_options", 00:24:43.112 "params": { 00:24:43.112 "impl_name": "posix", 00:24:43.112 "recv_buf_size": 2097152, 00:24:43.112 "send_buf_size": 2097152, 00:24:43.112 "enable_recv_pipe": true, 00:24:43.112 "enable_quickack": false, 00:24:43.112 "enable_placement_id": 0, 00:24:43.112 "enable_zerocopy_send_server": true, 00:24:43.112 "enable_zerocopy_send_client": false, 00:24:43.112 "zerocopy_threshold": 0, 00:24:43.112 "tls_version": 0, 00:24:43.112 "enable_ktls": false 00:24:43.112 } 00:24:43.112 } 00:24:43.112 ] 00:24:43.112 }, 00:24:43.112 { 00:24:43.112 "subsystem": "vmd", 00:24:43.112 "config": [] 00:24:43.112 }, 00:24:43.112 { 00:24:43.112 "subsystem": "accel", 00:24:43.112 "config": [ 00:24:43.112 { 00:24:43.112 "method": "accel_set_options", 00:24:43.112 "params": { 00:24:43.112 "small_cache_size": 128, 00:24:43.112 "large_cache_size": 16, 00:24:43.112 "task_count": 2048, 00:24:43.112 "sequence_count": 2048, 00:24:43.112 "buf_count": 2048 00:24:43.112 } 00:24:43.112 } 00:24:43.112 ] 00:24:43.112 }, 00:24:43.112 { 00:24:43.112 "subsystem": "bdev", 00:24:43.112 "config": [ 00:24:43.112 { 00:24:43.112 "method": "bdev_set_options", 00:24:43.112 "params": { 00:24:43.112 "bdev_io_pool_size": 65535, 00:24:43.112 "bdev_io_cache_size": 256, 00:24:43.112 "bdev_auto_examine": true, 00:24:43.112 "iobuf_small_cache_size": 128, 00:24:43.112 
"iobuf_large_cache_size": 16 00:24:43.112 } 00:24:43.112 }, 00:24:43.112 { 00:24:43.112 "method": "bdev_raid_set_options", 00:24:43.112 "params": { 00:24:43.112 "process_window_size_kb": 1024, 00:24:43.112 "process_max_bandwidth_mb_sec": 0 00:24:43.112 } 00:24:43.112 }, 00:24:43.112 { 00:24:43.112 "method": "bdev_iscsi_set_options", 00:24:43.112 "params": { 00:24:43.112 "timeout_sec": 30 00:24:43.113 } 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "method": "bdev_nvme_set_options", 00:24:43.113 "params": { 00:24:43.113 "action_on_timeout": "none", 00:24:43.113 "timeout_us": 0, 00:24:43.113 "timeout_admin_us": 0, 00:24:43.113 "keep_alive_timeout_ms": 10000, 00:24:43.113 "arbitration_burst": 0, 00:24:43.113 "low_priority_weight": 0, 00:24:43.113 "medium_priority_weight": 0, 00:24:43.113 "high_priority_weight": 0, 00:24:43.113 "nvme_adminq_poll_period_us": 10000, 00:24:43.113 "nvme_ioq_poll_period_us": 0, 00:24:43.113 "io_queue_requests": 0, 00:24:43.113 "delay_cmd_submit": true, 00:24:43.113 "transport_retry_count": 4, 00:24:43.113 "bdev_retry_count": 3, 00:24:43.113 "transport_ack_timeout": 0, 00:24:43.113 "ctrlr_loss_timeout_sec": 0, 00:24:43.113 "reconnect_delay_sec": 0, 00:24:43.113 "fast_io_fail_timeout_sec": 0, 00:24:43.113 "disable_auto_failback": false, 00:24:43.113 "generate_uuids": false, 00:24:43.113 "transport_tos": 0, 00:24:43.113 "nvme_error_stat": false, 00:24:43.113 "rdma_srq_size": 0, 00:24:43.113 "io_path_stat": false, 00:24:43.113 "allow_accel_sequence": false, 00:24:43.113 "rdma_max_cq_size": 0, 00:24:43.113 "rdma_cm_event_timeout_ms": 0, 00:24:43.113 "dhchap_digests": [ 00:24:43.113 "sha256", 00:24:43.113 "sha384", 00:24:43.113 "sha512" 00:24:43.113 ], 00:24:43.113 "dhchap_dhgroups": [ 00:24:43.113 "null", 00:24:43.113 "ffdhe2048", 00:24:43.113 "ffdhe3072", 00:24:43.113 "ffdhe4096", 00:24:43.113 "ffdhe6144", 00:24:43.113 "ffdhe8192" 00:24:43.113 ] 00:24:43.113 } 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "method": "bdev_nvme_set_hotplug", 
00:24:43.113 "params": { 00:24:43.113 "period_us": 100000, 00:24:43.113 "enable": false 00:24:43.113 } 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "method": "bdev_malloc_create", 00:24:43.113 "params": { 00:24:43.113 "name": "malloc0", 00:24:43.113 "num_blocks": 8192, 00:24:43.113 "block_size": 4096, 00:24:43.113 "physical_block_size": 4096, 00:24:43.113 "uuid": "0da466d8-3269-4835-8770-96c23fb55014", 00:24:43.113 "optimal_io_boundary": 0, 00:24:43.113 "md_size": 0, 00:24:43.113 "dif_type": 0, 00:24:43.113 "dif_is_head_of_md": false, 00:24:43.113 "dif_pi_format": 0 00:24:43.113 } 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "method": "bdev_wait_for_examine" 00:24:43.113 } 00:24:43.113 ] 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "subsystem": "nbd", 00:24:43.113 "config": [] 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "subsystem": "scheduler", 00:24:43.113 "config": [ 00:24:43.113 { 00:24:43.113 "method": "framework_set_scheduler", 00:24:43.113 "params": { 00:24:43.113 "name": "static" 00:24:43.113 } 00:24:43.113 } 00:24:43.113 ] 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "subsystem": "nvmf", 00:24:43.113 "config": [ 00:24:43.113 { 00:24:43.113 "method": "nvmf_set_config", 00:24:43.113 "params": { 00:24:43.113 "discovery_filter": "match_any", 00:24:43.113 "admin_cmd_passthru": { 00:24:43.113 "identify_ctrlr": false 00:24:43.113 }, 00:24:43.113 "dhchap_digests": [ 00:24:43.113 "sha256", 00:24:43.113 "sha384", 00:24:43.113 "sha512" 00:24:43.113 ], 00:24:43.113 "dhchap_dhgroups": [ 00:24:43.113 "null", 00:24:43.113 "ffdhe2048", 00:24:43.113 "ffdhe3072", 00:24:43.113 "ffdhe4096", 00:24:43.113 "ffdhe6144", 00:24:43.113 "ffdhe8192" 00:24:43.113 ] 00:24:43.113 } 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "method": "nvmf_set_max_subsystems", 00:24:43.113 "params": { 00:24:43.113 "max_subsystems": 1024 00:24:43.113 } 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "method": "nvmf_set_crdt", 00:24:43.113 "params": { 00:24:43.113 "crdt1": 0, 00:24:43.113 "crdt2": 0, 00:24:43.113 
"crdt3": 0 00:24:43.113 } 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "method": "nvmf_create_transport", 00:24:43.113 "params": { 00:24:43.113 "trtype": "TCP", 00:24:43.113 "max_queue_depth": 128, 00:24:43.113 "max_io_qpairs_per_ctrlr": 127, 00:24:43.113 "in_capsule_data_size": 4096, 00:24:43.113 "max_io_size": 131072, 00:24:43.113 "io_unit_size": 131072, 00:24:43.113 "max_aq_depth": 128, 00:24:43.113 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:43.113 "num_shared_buffers": 511, 00:24:43.113 "buf_cache_size": 4294967295, 00:24:43.113 "dif_insert_or_strip": false, 00:24:43.113 "zcopy": false, 00:24:43.113 "c2h_success": false, 00:24:43.113 "sock_priority": 0, 00:24:43.113 "abort_timeout_sec": 1, 00:24:43.113 "ack_timeout": 0, 00:24:43.113 "data_wr_pool_size": 0 00:24:43.113 } 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "method": "nvmf_create_subsystem", 00:24:43.113 "params": { 00:24:43.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.113 "allow_any_host": false, 00:24:43.113 "serial_number": "00000000000000000000", 00:24:43.113 "model_number": "SPDK bdev Controller", 00:24:43.113 "max_namespaces": 32, 00:24:43.113 "min_cntlid": 1, 00:24:43.113 "max_cntlid": 65519, 00:24:43.113 "ana_reporting": false 00:24:43.113 } 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "method": "nvmf_subsystem_add_host", 00:24:43.113 "params": { 00:24:43.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.113 "host": "nqn.2016-06.io.spdk:host1", 00:24:43.113 "psk": "key0" 00:24:43.113 } 00:24:43.113 }, 00:24:43.113 { 00:24:43.113 "method": "nvmf_subsystem_add_ns", 00:24:43.113 "params": { 00:24:43.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.113 "namespace": { 00:24:43.113 "nsid": 1, 00:24:43.113 "bdev_name": "malloc0", 00:24:43.113 "nguid": "0DA466D832694835877096C23FB55014", 00:24:43.113 "uuid": "0da466d8-3269-4835-8770-96c23fb55014", 00:24:43.113 "no_auto_visible": false 00:24:43.113 } 00:24:43.113 } 00:24:43.113 }, 
00:24:43.113 { 00:24:43.113 "method": "nvmf_subsystem_add_listener", 00:24:43.113 "params": { 00:24:43.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.113 "listen_address": { 00:24:43.113 "trtype": "TCP", 00:24:43.113 "adrfam": "IPv4", 00:24:43.113 "traddr": "10.0.0.2", 00:24:43.113 "trsvcid": "4420" 00:24:43.113 }, 00:24:43.113 "secure_channel": false, 00:24:43.113 "sock_impl": "ssl" 00:24:43.113 } 00:24:43.113 } 00:24:43.113 ] 00:24:43.113 } 00:24:43.113 ] 00:24:43.113 }' 00:24:43.113 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.113 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3907441 00:24:43.113 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:43.113 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3907441 00:24:43.113 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3907441 ']' 00:24:43.113 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.113 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:43.113 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.113 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:43.113 15:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.113 [2024-11-06 15:29:10.507667] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:24:43.113 [2024-11-06 15:29:10.507766] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.113 [2024-11-06 15:29:10.635792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.113 [2024-11-06 15:29:10.736725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.113 [2024-11-06 15:29:10.736770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.113 [2024-11-06 15:29:10.736780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.113 [2024-11-06 15:29:10.736806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.113 [2024-11-06 15:29:10.736815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:43.113 [2024-11-06 15:29:10.738454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.681 [2024-11-06 15:29:11.227314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.681 [2024-11-06 15:29:11.259372] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.681 [2024-11-06 15:29:11.259615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3907505 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3907505 /var/tmp/bdevperf.sock 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3907505 ']' 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.940 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:43.940 "subsystems": [ 00:24:43.940 { 00:24:43.940 "subsystem": "keyring", 00:24:43.940 "config": [ 00:24:43.940 { 00:24:43.940 "method": "keyring_file_add_key", 00:24:43.940 "params": { 00:24:43.940 "name": "key0", 00:24:43.940 "path": "/tmp/tmp.3NDmuKDv3S" 00:24:43.940 } 00:24:43.940 } 00:24:43.940 ] 00:24:43.940 }, 00:24:43.940 { 00:24:43.940 "subsystem": "iobuf", 00:24:43.940 "config": [ 00:24:43.940 { 00:24:43.940 "method": "iobuf_set_options", 00:24:43.940 "params": { 00:24:43.940 "small_pool_count": 8192, 00:24:43.940 "large_pool_count": 1024, 00:24:43.940 "small_bufsize": 8192, 00:24:43.940 "large_bufsize": 135168, 00:24:43.940 "enable_numa": false 00:24:43.940 } 00:24:43.940 } 00:24:43.940 ] 00:24:43.940 }, 00:24:43.940 { 00:24:43.940 "subsystem": "sock", 00:24:43.940 "config": [ 00:24:43.940 { 00:24:43.940 "method": "sock_set_default_impl", 00:24:43.940 "params": { 00:24:43.940 "impl_name": "posix" 00:24:43.940 } 00:24:43.940 }, 00:24:43.940 { 00:24:43.940 "method": "sock_impl_set_options", 00:24:43.940 "params": { 00:24:43.940 "impl_name": "ssl", 00:24:43.940 "recv_buf_size": 4096, 00:24:43.940 "send_buf_size": 4096, 00:24:43.940 "enable_recv_pipe": true, 00:24:43.940 "enable_quickack": false, 00:24:43.941 "enable_placement_id": 0, 00:24:43.941 "enable_zerocopy_send_server": true, 00:24:43.941 "enable_zerocopy_send_client": false, 00:24:43.941 "zerocopy_threshold": 0, 00:24:43.941 "tls_version": 0, 00:24:43.941 "enable_ktls": false 00:24:43.941 } 00:24:43.941 }, 00:24:43.941 { 00:24:43.941 "method": "sock_impl_set_options", 00:24:43.941 "params": { 
00:24:43.941 "impl_name": "posix", 00:24:43.941 "recv_buf_size": 2097152, 00:24:43.941 "send_buf_size": 2097152, 00:24:43.941 "enable_recv_pipe": true, 00:24:43.941 "enable_quickack": false, 00:24:43.941 "enable_placement_id": 0, 00:24:43.941 "enable_zerocopy_send_server": true, 00:24:43.941 "enable_zerocopy_send_client": false, 00:24:43.941 "zerocopy_threshold": 0, 00:24:43.941 "tls_version": 0, 00:24:43.941 "enable_ktls": false 00:24:43.941 } 00:24:43.941 } 00:24:43.941 ] 00:24:43.941 }, 00:24:43.941 { 00:24:43.941 "subsystem": "vmd", 00:24:43.941 "config": [] 00:24:43.941 }, 00:24:43.941 { 00:24:43.941 "subsystem": "accel", 00:24:43.941 "config": [ 00:24:43.941 { 00:24:43.941 "method": "accel_set_options", 00:24:43.941 "params": { 00:24:43.941 "small_cache_size": 128, 00:24:43.941 "large_cache_size": 16, 00:24:43.941 "task_count": 2048, 00:24:43.941 "sequence_count": 2048, 00:24:43.941 "buf_count": 2048 00:24:43.941 } 00:24:43.941 } 00:24:43.941 ] 00:24:43.941 }, 00:24:43.941 { 00:24:43.941 "subsystem": "bdev", 00:24:43.941 "config": [ 00:24:43.941 { 00:24:43.941 "method": "bdev_set_options", 00:24:43.941 "params": { 00:24:43.941 "bdev_io_pool_size": 65535, 00:24:43.941 "bdev_io_cache_size": 256, 00:24:43.941 "bdev_auto_examine": true, 00:24:43.941 "iobuf_small_cache_size": 128, 00:24:43.941 "iobuf_large_cache_size": 16 00:24:43.941 } 00:24:43.941 }, 00:24:43.941 { 00:24:43.941 "method": "bdev_raid_set_options", 00:24:43.941 "params": { 00:24:43.941 "process_window_size_kb": 1024, 00:24:43.941 "process_max_bandwidth_mb_sec": 0 00:24:43.941 } 00:24:43.941 }, 00:24:43.941 { 00:24:43.941 "method": "bdev_iscsi_set_options", 00:24:43.941 "params": { 00:24:43.941 "timeout_sec": 30 00:24:43.941 } 00:24:43.941 }, 00:24:43.941 { 00:24:43.941 "method": "bdev_nvme_set_options", 00:24:43.941 "params": { 00:24:43.941 "action_on_timeout": "none", 00:24:43.941 "timeout_us": 0, 00:24:43.941 "timeout_admin_us": 0, 00:24:43.941 "keep_alive_timeout_ms": 10000, 00:24:43.941 
"arbitration_burst": 0, 00:24:43.941 "low_priority_weight": 0, 00:24:43.941 "medium_priority_weight": 0, 00:24:43.941 "high_priority_weight": 0, 00:24:43.941 "nvme_adminq_poll_period_us": 10000, 00:24:43.941 "nvme_ioq_poll_period_us": 0, 00:24:43.941 "io_queue_requests": 512, 00:24:43.941 "delay_cmd_submit": true, 00:24:43.941 "transport_retry_count": 4, 00:24:43.941 "bdev_retry_count": 3, 00:24:43.941 "transport_ack_timeout": 0, 00:24:43.941 "ctrlr_loss_timeout_sec": 0, 00:24:43.941 "reconnect_delay_sec": 0, 00:24:43.941 "fast_io_fail_timeout_sec": 0, 00:24:43.941 "disable_auto_failback": false, 00:24:43.941 "generate_uuids": false, 00:24:43.941 "transport_tos": 0, 00:24:43.941 "nvme_error_stat": false, 00:24:43.941 "rdma_srq_size": 0, 00:24:43.941 "io_path_stat": false, 00:24:43.941 "allow_accel_sequence": false, 00:24:43.941 "rdma_max_cq_size": 0, 00:24:43.941 "rdma_cm_event_timeout_ms": 0, 00:24:43.941 "dhchap_digests": [ 00:24:43.941 "sha256", 00:24:43.941 "sha384", 00:24:43.941 "sha512" 00:24:43.941 ], 00:24:43.941 "dhchap_dhgroups": [ 00:24:43.941 "null", 00:24:43.941 "ffdhe2048", 00:24:43.941 "ffdhe3072", 00:24:43.941 "ffdhe4096", 00:24:43.941 "ffdhe6144", 00:24:43.941 "ffdhe8192" 00:24:43.941 ] 00:24:43.941 } 00:24:43.941 }, 00:24:43.941 { 00:24:43.941 "method": "bdev_nvme_attach_controller", 00:24:43.941 "params": { 00:24:43.941 "name": "nvme0", 00:24:43.941 "trtype": "TCP", 00:24:43.941 "adrfam": "IPv4", 00:24:43.941 "traddr": "10.0.0.2", 00:24:43.941 "trsvcid": "4420", 00:24:43.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.941 "prchk_reftag": false, 00:24:43.941 "prchk_guard": false, 00:24:43.941 "ctrlr_loss_timeout_sec": 0, 00:24:43.941 "reconnect_delay_sec": 0, 00:24:43.941 "fast_io_fail_timeout_sec": 0, 00:24:43.941 "psk": "key0", 00:24:43.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.941 "hdgst": false, 00:24:43.941 "ddgst": false, 00:24:43.941 "multipath": "multipath" 00:24:43.941 } 00:24:43.941 }, 00:24:43.941 { 00:24:43.941 
"method": "bdev_nvme_set_hotplug", 00:24:43.941 "params": { 00:24:43.941 "period_us": 100000, 00:24:43.941 "enable": false 00:24:43.941 } 00:24:43.941 }, 00:24:43.941 { 00:24:43.941 "method": "bdev_enable_histogram", 00:24:43.941 "params": { 00:24:43.941 "name": "nvme0n1", 00:24:43.941 "enable": true 00:24:43.941 } 00:24:43.941 }, 00:24:43.941 { 00:24:43.941 "method": "bdev_wait_for_examine" 00:24:43.941 } 00:24:43.941 ] 00:24:43.941 }, 00:24:43.941 { 00:24:43.941 "subsystem": "nbd", 00:24:43.941 "config": [] 00:24:43.941 } 00:24:43.941 ] 00:24:43.941 }' 00:24:43.941 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:43.941 15:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.941 [2024-11-06 15:29:11.428182] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:43.941 [2024-11-06 15:29:11.428270] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907505 ] 00:24:43.941 [2024-11-06 15:29:11.555140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.200 [2024-11-06 15:29:11.668922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.458 [2024-11-06 15:29:12.074158] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.717 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:44.717 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:44.717 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:44.717 15:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:44.976 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.976 15:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:44.976 Running I/O for 1 seconds... 00:24:46.170 4609.00 IOPS, 18.00 MiB/s 00:24:46.170 Latency(us) 00:24:46.170 [2024-11-06T14:29:13.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.170 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:46.170 Verification LBA range: start 0x0 length 0x2000 00:24:46.170 nvme0n1 : 1.02 4661.21 18.21 0.00 0.00 27247.69 6023.07 33704.23 00:24:46.170 [2024-11-06T14:29:13.808Z] =================================================================================================================== 00:24:46.170 [2024-11-06T14:29:13.808Z] Total : 4661.21 18.21 0.00 0.00 27247.69 6023.07 33704.23 00:24:46.170 { 00:24:46.170 "results": [ 00:24:46.170 { 00:24:46.170 "job": "nvme0n1", 00:24:46.170 "core_mask": "0x2", 00:24:46.170 "workload": "verify", 00:24:46.170 "status": "finished", 00:24:46.170 "verify_range": { 00:24:46.170 "start": 0, 00:24:46.170 "length": 8192 00:24:46.170 }, 00:24:46.170 "queue_depth": 128, 00:24:46.170 "io_size": 4096, 00:24:46.170 "runtime": 1.01626, 00:24:46.170 "iops": 4661.208745793399, 00:24:46.170 "mibps": 18.207846663255467, 00:24:46.170 "io_failed": 0, 00:24:46.170 "io_timeout": 0, 00:24:46.170 "avg_latency_us": 27247.69039355831, 00:24:46.170 "min_latency_us": 6023.070476190476, 00:24:46.170 "max_latency_us": 33704.22857142857 00:24:46.170 } 00:24:46.170 ], 00:24:46.170 "core_count": 1 00:24:46.170 } 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:46.170 15:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:46.170 nvmf_trace.0 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3907505 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3907505 ']' 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3907505 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 3907505 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3907505' 00:24:46.170 killing process with pid 3907505 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3907505 00:24:46.170 Received shutdown signal, test time was about 1.000000 seconds 00:24:46.170 00:24:46.170 Latency(us) 00:24:46.170 [2024-11-06T14:29:13.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.170 [2024-11-06T14:29:13.808Z] =================================================================================================================== 00:24:46.170 [2024-11-06T14:29:13.808Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.170 15:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3907505 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.107 rmmod nvme_tcp 00:24:47.107 rmmod nvme_fabrics 00:24:47.107 rmmod nvme_keyring 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3907441 ']' 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3907441 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3907441 ']' 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3907441 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3907441 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3907441' 00:24:47.107 killing process with pid 3907441 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3907441 00:24:47.107 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3907441 00:24:48.485 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:48.485 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:48.485 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:48.485 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:24:48.485 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:48.485 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:48.485 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:48.485 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:48.485 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:48.485 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.485 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.485 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.390 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:50.390 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.aImVXwxuE2 /tmp/tmp.rFXc9wcz4O /tmp/tmp.3NDmuKDv3S 00:24:50.390 00:24:50.390 real 1m46.476s 00:24:50.390 user 2m45.974s 00:24:50.390 sys 0m30.517s 00:24:50.390 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:50.390 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.390 ************************************ 00:24:50.390 END TEST nvmf_tls 00:24:50.390 ************************************ 00:24:50.390 15:29:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:50.390 15:29:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:50.390 15:29:17 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:24:50.390 15:29:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:50.390 ************************************ 00:24:50.390 START TEST nvmf_fips 00:24:50.390 ************************************ 00:24:50.390 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:50.649 * Looking for test storage... 00:24:50.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.649 
15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:50.649 15:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:50.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.649 --rc genhtml_branch_coverage=1 00:24:50.649 --rc genhtml_function_coverage=1 00:24:50.649 --rc genhtml_legend=1 00:24:50.649 --rc geninfo_all_blocks=1 00:24:50.649 --rc geninfo_unexecuted_blocks=1 00:24:50.649 00:24:50.649 ' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:50.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.649 --rc genhtml_branch_coverage=1 00:24:50.649 --rc genhtml_function_coverage=1 00:24:50.649 --rc genhtml_legend=1 00:24:50.649 --rc geninfo_all_blocks=1 00:24:50.649 --rc geninfo_unexecuted_blocks=1 00:24:50.649 00:24:50.649 ' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:50.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.649 --rc genhtml_branch_coverage=1 00:24:50.649 --rc genhtml_function_coverage=1 00:24:50.649 --rc genhtml_legend=1 00:24:50.649 --rc geninfo_all_blocks=1 00:24:50.649 --rc geninfo_unexecuted_blocks=1 00:24:50.649 00:24:50.649 ' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:50.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.649 --rc genhtml_branch_coverage=1 00:24:50.649 --rc genhtml_function_coverage=1 00:24:50.649 --rc genhtml_legend=1 00:24:50.649 --rc geninfo_all_blocks=1 00:24:50.649 --rc geninfo_unexecuted_blocks=1 00:24:50.649 00:24:50.649 ' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.649 15:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.649 15:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:50.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:50.649 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:50.650 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:50.909 Error setting digest 00:24:50.909 40D2342B6B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:50.909 40D2342B6B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:50.909 15:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:50.909 15:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:57.479 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:57.479 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.479 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:57.480 Found net devices under 0000:86:00.0: cvl_0_0 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:57.480 Found net devices under 0000:86:00.1: cvl_0_1 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.480 15:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.480 15:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:57.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:24:57.480 00:24:57.480 --- 10.0.0.2 ping statistics --- 00:24:57.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.480 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:57.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:24:57.480 00:24:57.480 --- 10.0.0.1 ping statistics --- 00:24:57.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.480 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:57.480 15:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3911756 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3911756 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3911756 ']' 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:57.480 15:29:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.480 [2024-11-06 15:29:24.371909] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:24:57.480 [2024-11-06 15:29:24.372011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.480 [2024-11-06 15:29:24.503747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.480 [2024-11-06 15:29:24.604577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.480 [2024-11-06 15:29:24.604622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.480 [2024-11-06 15:29:24.604633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.480 [2024-11-06 15:29:24.604644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.480 [2024-11-06 15:29:24.604652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:57.480 [2024-11-06 15:29:24.606129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.739 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:57.739 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:57.739 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:57.740 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.740 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.740 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.740 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:57.740 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:57.740 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:57.740 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Nwi 00:24:57.740 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:57.740 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Nwi 00:24:57.740 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Nwi 00:24:57.740 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Nwi 00:24:57.740 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:57.740 [2024-11-06 15:29:25.353647] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.740 [2024-11-06 15:29:25.369633] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:57.740 [2024-11-06 15:29:25.369839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.999 malloc0 00:24:57.999 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:57.999 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3912002 00:24:57.999 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:57.999 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3912002 /var/tmp/bdevperf.sock 00:24:57.999 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3912002 ']' 00:24:57.999 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:57.999 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:57.999 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:57.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:57.999 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:57.999 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.999 [2024-11-06 15:29:25.566752] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:24:57.999 [2024-11-06 15:29:25.566849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912002 ] 00:24:58.257 [2024-11-06 15:29:25.688133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.257 [2024-11-06 15:29:25.798619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.825 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:58.825 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:24:58.825 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Nwi 00:24:59.084 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:59.084 [2024-11-06 15:29:26.704722] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:59.343 TLSTESTn1 00:24:59.343 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:59.343 Running I/O for 10 seconds... 
00:25:01.654 4573.00 IOPS, 17.86 MiB/s [2024-11-06T14:29:30.236Z] 4693.00 IOPS, 18.33 MiB/s [2024-11-06T14:29:31.170Z] 4707.67 IOPS, 18.39 MiB/s [2024-11-06T14:29:32.106Z] 4677.00 IOPS, 18.27 MiB/s [2024-11-06T14:29:33.041Z] 4700.00 IOPS, 18.36 MiB/s [2024-11-06T14:29:33.976Z] 4700.50 IOPS, 18.36 MiB/s [2024-11-06T14:29:35.351Z] 4697.14 IOPS, 18.35 MiB/s [2024-11-06T14:29:35.923Z] 4687.00 IOPS, 18.31 MiB/s [2024-11-06T14:29:37.298Z] 4680.89 IOPS, 18.28 MiB/s [2024-11-06T14:29:37.298Z] 4690.80 IOPS, 18.32 MiB/s 00:25:09.660 Latency(us) 00:25:09.660 [2024-11-06T14:29:37.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.660 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:09.660 Verification LBA range: start 0x0 length 0x2000 00:25:09.660 TLSTESTn1 : 10.02 4696.14 18.34 0.00 0.00 27215.28 5960.66 33704.23 00:25:09.660 [2024-11-06T14:29:37.298Z] =================================================================================================================== 00:25:09.660 [2024-11-06T14:29:37.298Z] Total : 4696.14 18.34 0.00 0.00 27215.28 5960.66 33704.23 00:25:09.660 { 00:25:09.660 "results": [ 00:25:09.660 { 00:25:09.660 "job": "TLSTESTn1", 00:25:09.660 "core_mask": "0x4", 00:25:09.660 "workload": "verify", 00:25:09.660 "status": "finished", 00:25:09.660 "verify_range": { 00:25:09.660 "start": 0, 00:25:09.660 "length": 8192 00:25:09.660 }, 00:25:09.660 "queue_depth": 128, 00:25:09.660 "io_size": 4096, 00:25:09.660 "runtime": 10.015663, 00:25:09.660 "iops": 4696.144428980887, 00:25:09.660 "mibps": 18.34431417570659, 00:25:09.660 "io_failed": 0, 00:25:09.660 "io_timeout": 0, 00:25:09.660 "avg_latency_us": 27215.28278285168, 00:25:09.660 "min_latency_us": 5960.655238095238, 00:25:09.660 "max_latency_us": 33704.22857142857 00:25:09.660 } 00:25:09.660 ], 00:25:09.660 "core_count": 1 00:25:09.660 } 00:25:09.660 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:09.660 
15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:09.660 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:25:09.660 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:25:09.660 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:25:09.660 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:09.660 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:25:09.660 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:25:09.660 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:25:09.660 15:29:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:09.660 nvmf_trace.0 00:25:09.660 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:25:09.660 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3912002 00:25:09.660 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3912002 ']' 00:25:09.660 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3912002 00:25:09.660 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:25:09.660 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:09.660 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3912002 00:25:09.660 15:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:09.660 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:09.660 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3912002' 00:25:09.660 killing process with pid 3912002 00:25:09.660 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3912002 00:25:09.660 Received shutdown signal, test time was about 10.000000 seconds 00:25:09.660 00:25:09.660 Latency(us) 00:25:09.660 [2024-11-06T14:29:37.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.661 [2024-11-06T14:29:37.299Z] =================================================================================================================== 00:25:09.661 [2024-11-06T14:29:37.299Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.661 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3912002 00:25:10.598 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:10.598 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.598 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:10.598 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:10.598 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:10.598 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.598 15:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:10.598 rmmod nvme_tcp 00:25:10.598 rmmod nvme_fabrics 00:25:10.598 rmmod nvme_keyring 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3911756 ']' 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3911756 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3911756 ']' 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3911756 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3911756 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3911756' 00:25:10.598 killing process with pid 3911756 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3911756 00:25:10.598 15:29:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3911756 00:25:11.976 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:11.976 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:11.976 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:11.976 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:25:11.976 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:11.976 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:11.976 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:11.976 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.976 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:11.976 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.976 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.976 15:29:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.880 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.880 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Nwi 00:25:13.880 00:25:13.880 real 0m23.378s 00:25:13.880 user 0m26.066s 00:25:13.880 sys 0m9.400s 00:25:13.880 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:13.880 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:13.880 ************************************ 00:25:13.880 END TEST nvmf_fips 00:25:13.880 ************************************ 00:25:13.880 15:29:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:13.880 15:29:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:13.880 15:29:41 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:25:13.880 15:29:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:13.880 ************************************ 00:25:13.880 START TEST nvmf_control_msg_list 00:25:13.880 ************************************ 00:25:13.880 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:14.140 * Looking for test storage... 00:25:14.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.140 15:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:14.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.140 --rc genhtml_branch_coverage=1 00:25:14.140 --rc genhtml_function_coverage=1 00:25:14.140 --rc genhtml_legend=1 00:25:14.140 --rc geninfo_all_blocks=1 00:25:14.140 --rc geninfo_unexecuted_blocks=1 00:25:14.140 00:25:14.140 ' 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:14.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.140 --rc genhtml_branch_coverage=1 00:25:14.140 --rc genhtml_function_coverage=1 00:25:14.140 --rc genhtml_legend=1 00:25:14.140 --rc geninfo_all_blocks=1 00:25:14.140 --rc geninfo_unexecuted_blocks=1 00:25:14.140 00:25:14.140 ' 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:14.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.140 --rc genhtml_branch_coverage=1 00:25:14.140 --rc genhtml_function_coverage=1 00:25:14.140 --rc genhtml_legend=1 00:25:14.140 --rc geninfo_all_blocks=1 00:25:14.140 --rc geninfo_unexecuted_blocks=1 00:25:14.140 00:25:14.140 ' 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # 
LCOV='lcov 00:25:14.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.140 --rc genhtml_branch_coverage=1 00:25:14.140 --rc genhtml_function_coverage=1 00:25:14.140 --rc genhtml_legend=1 00:25:14.140 --rc geninfo_all_blocks=1 00:25:14.140 --rc geninfo_unexecuted_blocks=1 00:25:14.140 00:25:14.140 ' 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
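The `cmp_versions` trace above (scripts/common.sh@333-368) splits two dotted version strings on `.`/`-` and compares them component by component to decide whether the installed `lcov` predates 2.x. A minimal re-implementation sketch of that logic — the function name `lt` matches the helper seen in the trace, but this is a standalone approximation, not the exact SPDK source:

```shell
# Sketch of the dotted-version "less than" check traced from scripts/common.sh.
# Splits on '.' and '-', pads the shorter version with zeros, compares numerically.
lt() {
  local -a ver1 ver2
  IFS=.- read -ra ver1 <<< "$1"
  IFS=.- read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
    if (( a < b )); then return 0; fi
    if (( a > b )); then return 1; fi
  done
  return 1   # equal versions are not less-than
}
```

In the run above this evaluates `lt 1.15 2`, which succeeds (1 < 2 on the first component), so the coverage run falls back to the pre-2.0 `--rc lcov_branch_coverage=1` option spelling.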
00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.140 15:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.140 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:14.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.141 15:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:14.141 15:29:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:20.715 15:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:20.715 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:20.715 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:20.715 15:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:20.715 Found net devices under 0000:86:00.0: cvl_0_0 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.715 15:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:20.715 Found net devices under 0000:86:00.1: cvl_0_1 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.715 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.716 15:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:20.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:25:20.716 00:25:20.716 --- 10.0.0.2 ping statistics --- 00:25:20.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.716 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:25:20.716 00:25:20.716 --- 10.0.0.1 ping statistics --- 00:25:20.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.716 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3917694 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3917694 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 3917694 ']' 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:20.716 15:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:20.716 [2024-11-06 15:29:47.723791] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:25:20.716 [2024-11-06 15:29:47.723882] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.716 [2024-11-06 15:29:47.855091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.716 [2024-11-06 15:29:47.956966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.716 [2024-11-06 15:29:47.957012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.716 [2024-11-06 15:29:47.957022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.716 [2024-11-06 15:29:47.957032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.716 [2024-11-06 15:29:47.957040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
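The network plumbing traced above (`nvmf_tcp_init`, nvmf/common.sh@250-291) reduces to a short `ip(8)` sequence: flush both ports, move the target port into a fresh namespace, address both sides, bring links up, punch a tagged iptables hole for port 4420, and ping in both directions. A sketch of that sequence, assuming root privileges and this test bed's interface names (`cvl_0_0`/`cvl_0_1`) — not a general-purpose script:

```shell
# Sketch of the target-namespace setup captured in this log. Requires root
# and the two physical ports present on this particular test machine.
setup_test_netns() {
  local ns=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0                    # clear stale addresses
  ip -4 addr flush cvl_0_1
  ip netns add "$ns"                          # target runs isolated here
  ip link set cvl_0_0 netns "$ns"             # move target port into the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec "$ns" ip link set cvl_0_0 up
  ip netns exec "$ns" ip link set lo up
  # Open the NVMe/TCP port; the SPDK_NVMF comment tag lets cleanup find it.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                          # initiator -> target
  ip netns exec "$ns" ping -c 1 10.0.0.1      # target -> initiator
}
```

Teardown is the mirror image: the `iptr` helper seen at the top of this chunk runs `iptables-save | grep -v SPDK_NVMF | iptables-restore`, so only rules carrying the comment tag are stripped, and `remove_spdk_ns` deletes the namespace.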
00:25:20.716 [2024-11-06 15:29:47.958480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:20.974 [2024-11-06 15:29:48.556633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:20.974 Malloc0 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.974 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:21.232 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.232 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:21.232 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.232 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:21.232 [2024-11-06 15:29:48.620146] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.232 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.232 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3917847 00:25:21.232 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3917848 00:25:21.232 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:21.232 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3917849 00:25:21.232 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:21.232 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3917847 00:25:21.232 15:29:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:21.232 [2024-11-06 15:29:48.729833] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:25:21.232 [2024-11-06 15:29:48.730119] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:21.232 [2024-11-06 15:29:48.738981] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:22.607 Initializing NVMe Controllers 00:25:22.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:22.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:22.607 Initialization complete. Launching workers. 00:25:22.607 ======================================================== 00:25:22.607 Latency(us) 00:25:22.607 Device Information : IOPS MiB/s Average min max 00:25:22.607 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3993.99 15.60 249.88 148.18 950.15 00:25:22.607 ======================================================== 00:25:22.607 Total : 3993.99 15.60 249.88 148.18 950.15 00:25:22.607 00:25:22.607 Initializing NVMe Controllers 00:25:22.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:22.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:22.607 Initialization complete. Launching workers. 
00:25:22.607 ======================================================== 00:25:22.607 Latency(us) 00:25:22.607 Device Information : IOPS MiB/s Average min max 00:25:22.607 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3932.00 15.36 253.85 144.58 949.51 00:25:22.607 ======================================================== 00:25:22.607 Total : 3932.00 15.36 253.85 144.58 949.51 00:25:22.607 00:25:22.607 Initializing NVMe Controllers 00:25:22.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:22.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:22.607 Initialization complete. Launching workers. 00:25:22.607 ======================================================== 00:25:22.607 Latency(us) 00:25:22.607 Device Information : IOPS MiB/s Average min max 00:25:22.607 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3927.00 15.34 254.18 140.44 825.53 00:25:22.607 ======================================================== 00:25:22.607 Total : 3927.00 15.34 254.18 140.44 825.53 00:25:22.607 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3917848 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3917849 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:22.607 15:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:22.607 rmmod nvme_tcp 00:25:22.607 rmmod nvme_fabrics 00:25:22.607 rmmod nvme_keyring 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3917694 ']' 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3917694 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 3917694 ']' 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 3917694 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:22.607 15:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3917694 00:25:22.607 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:22.607 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:22.607 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 3917694' 00:25:22.607 killing process with pid 3917694 00:25:22.607 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 3917694 00:25:22.607 15:29:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 3917694 00:25:23.984 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:23.984 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:23.984 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:23.984 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:23.984 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:23.984 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:23.984 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:23.984 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:23.984 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:23.984 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.984 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.984 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.890 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:25.890 00:25:25.890 real 0m11.843s 00:25:25.890 user 0m8.415s 
00:25:25.890 sys 0m5.671s 00:25:25.890 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:25.890 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.890 ************************************ 00:25:25.890 END TEST nvmf_control_msg_list 00:25:25.890 ************************************ 00:25:25.890 15:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:25.890 15:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:25.890 15:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:25.890 15:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:25.890 ************************************ 00:25:25.890 START TEST nvmf_wait_for_buf 00:25:25.890 ************************************ 00:25:25.890 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:25.890 * Looking for test storage... 
00:25:25.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:25.890 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:25.890 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:25.890 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:25:26.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.151 --rc genhtml_branch_coverage=1 00:25:26.151 --rc genhtml_function_coverage=1 00:25:26.151 --rc genhtml_legend=1 00:25:26.151 --rc geninfo_all_blocks=1 00:25:26.151 --rc geninfo_unexecuted_blocks=1 00:25:26.151 00:25:26.151 ' 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:26.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.151 --rc genhtml_branch_coverage=1 00:25:26.151 --rc genhtml_function_coverage=1 00:25:26.151 --rc genhtml_legend=1 00:25:26.151 --rc geninfo_all_blocks=1 00:25:26.151 --rc geninfo_unexecuted_blocks=1 00:25:26.151 00:25:26.151 ' 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:26.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.151 --rc genhtml_branch_coverage=1 00:25:26.151 --rc genhtml_function_coverage=1 00:25:26.151 --rc genhtml_legend=1 00:25:26.151 --rc geninfo_all_blocks=1 00:25:26.151 --rc geninfo_unexecuted_blocks=1 00:25:26.151 00:25:26.151 ' 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:26.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.151 --rc genhtml_branch_coverage=1 00:25:26.151 --rc genhtml_function_coverage=1 00:25:26.151 --rc genhtml_legend=1 00:25:26.151 --rc geninfo_all_blocks=1 00:25:26.151 --rc geninfo_unexecuted_blocks=1 00:25:26.151 00:25:26.151 ' 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.151 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:26.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:26.152 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:32.723 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:32.723 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:32.723 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:32.724 Found net devices under 0000:86:00.0: cvl_0_0 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:32.724 15:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:32.724 Found net devices under 0000:86:00.1: cvl_0_1 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:32.724 15:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.724 15:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:32.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:25:32.724 00:25:32.724 --- 10.0.0.2 ping statistics --- 00:25:32.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.724 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:25:32.724 00:25:32.724 --- 10.0.0.1 ping statistics --- 00:25:32.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.724 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3921833 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3921833 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 3921833 ']' 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:32.724 15:29:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.724 [2024-11-06 15:29:59.622642] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:25:32.724 [2024-11-06 15:29:59.622730] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.724 [2024-11-06 15:29:59.752237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.724 [2024-11-06 15:29:59.859056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.724 [2024-11-06 15:29:59.859101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:32.724 [2024-11-06 15:29:59.859111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.724 [2024-11-06 15:29:59.859122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.724 [2024-11-06 15:29:59.859129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.724 [2024-11-06 15:29:59.860455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.984 
15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.984 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.243 Malloc0 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.243 [2024-11-06 15:30:00.792680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.243 [2024-11-06 15:30:00.820900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:33.243 15:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:33.502 [2024-11-06 15:30:00.945167] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:34.972 Initializing NVMe Controllers 00:25:34.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:34.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:34.972 Initialization complete. Launching workers. 00:25:34.972 ======================================================== 00:25:34.972 Latency(us) 00:25:34.972 Device Information : IOPS MiB/s Average min max 00:25:34.972 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 125.00 15.62 33291.83 26938.59 69595.44 00:25:34.972 ======================================================== 00:25:34.972 Total : 125.00 15.62 33291.83 26938.59 69595.44 00:25:34.972 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.972 15:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:34.972 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:35.231 rmmod nvme_tcp 00:25:35.231 rmmod nvme_fabrics 00:25:35.231 rmmod nvme_keyring 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3921833 ']' 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3921833 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 3921833 ']' 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 3921833 
00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3921833 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3921833' 00:25:35.231 killing process with pid 3921833 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 3921833 00:25:35.231 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 3921833 00:25:36.168 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:36.168 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:36.168 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:36.168 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:36.168 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:36.168 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:36.168 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:36.168 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:36.168 15:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:36.168 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.168 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.168 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.706 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:38.706 00:25:38.706 real 0m12.440s 00:25:38.706 user 0m6.073s 00:25:38.706 sys 0m5.009s 00:25:38.706 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:38.706 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.706 ************************************ 00:25:38.706 END TEST nvmf_wait_for_buf 00:25:38.706 ************************************ 00:25:38.706 15:30:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:38.706 15:30:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:38.706 15:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:38.706 15:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:38.706 15:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:38.706 ************************************ 00:25:38.706 START TEST nvmf_fuzz 00:25:38.706 ************************************ 00:25:38.706 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:25:38.706 * Looking for test storage... 00:25:38.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:38.706 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:38.706 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:25:38.706 15:30:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:38.706 15:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.706 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:38.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.707 --rc genhtml_branch_coverage=1 00:25:38.707 --rc genhtml_function_coverage=1 
00:25:38.707 --rc genhtml_legend=1 00:25:38.707 --rc geninfo_all_blocks=1 00:25:38.707 --rc geninfo_unexecuted_blocks=1 00:25:38.707 00:25:38.707 ' 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:38.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.707 --rc genhtml_branch_coverage=1 00:25:38.707 --rc genhtml_function_coverage=1 00:25:38.707 --rc genhtml_legend=1 00:25:38.707 --rc geninfo_all_blocks=1 00:25:38.707 --rc geninfo_unexecuted_blocks=1 00:25:38.707 00:25:38.707 ' 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:38.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.707 --rc genhtml_branch_coverage=1 00:25:38.707 --rc genhtml_function_coverage=1 00:25:38.707 --rc genhtml_legend=1 00:25:38.707 --rc geninfo_all_blocks=1 00:25:38.707 --rc geninfo_unexecuted_blocks=1 00:25:38.707 00:25:38.707 ' 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:38.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.707 --rc genhtml_branch_coverage=1 00:25:38.707 --rc genhtml_function_coverage=1 00:25:38.707 --rc genhtml_legend=1 00:25:38.707 --rc geninfo_all_blocks=1 00:25:38.707 --rc geninfo_unexecuted_blocks=1 00:25:38.707 00:25:38.707 ' 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.707 
15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:38.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:38.707 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.280 15:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:25:45.280 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:45.280 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:45.280 Found net devices under 0000:86:00.0: cvl_0_0 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.280 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:45.281 Found net devices under 0000:86:00.1: cvl_0_1 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:45.281 15:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:45.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:25:45.281 00:25:45.281 --- 10.0.0.2 ping statistics --- 00:25:45.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.281 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:25:45.281 00:25:45.281 --- 10.0.0.1 ping statistics --- 00:25:45.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.281 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:45.281 15:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3926382 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3926382 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # '[' 
-z 3926382 ']' 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@866 -- # return 0 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.281 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:45.540 Malloc0 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:45.541 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:17.625 Fuzzing completed. 
Shutting down the fuzz application 00:26:17.625 00:26:17.625 Dumping successful admin opcodes: 00:26:17.625 8, 9, 10, 24, 00:26:17.625 Dumping successful io opcodes: 00:26:17.625 0, 9, 00:26:17.625 NS: 0x2000008efec0 I/O qp, Total commands completed: 674150, total successful commands: 3937, random_seed: 634904512 00:26:17.625 NS: 0x2000008efec0 admin qp, Total commands completed: 71656, total successful commands: 563, random_seed: 1797997312 00:26:17.626 15:30:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:17.885 Fuzzing completed. Shutting down the fuzz application 00:26:17.885 00:26:17.885 Dumping successful admin opcodes: 00:26:17.885 24, 00:26:17.885 Dumping successful io opcodes: 00:26:17.885 00:26:17.885 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1295669736 00:26:17.885 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1295768940 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:17.885 15:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:17.885 rmmod nvme_tcp 00:26:17.885 rmmod nvme_fabrics 00:26:17.885 rmmod nvme_keyring 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3926382 ']' 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3926382 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3926382 ']' 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # kill -0 3926382 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # uname 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3926382 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
sudo ']' 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3926382' 00:26:17.885 killing process with pid 3926382 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@971 -- # kill 3926382 00:26:17.885 15:30:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@976 -- # wait 3926382 00:26:19.263 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:19.263 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:19.263 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:19.263 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:26:19.263 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:26:19.263 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:19.263 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:26:19.263 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.263 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:19.263 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.263 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.263 15:30:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.168 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:21.168 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:21.427 00:26:21.427 real 0m42.944s 00:26:21.427 user 0m56.639s 00:26:21.427 sys 0m16.442s 00:26:21.427 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:21.427 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:21.427 ************************************ 00:26:21.427 END TEST nvmf_fuzz 00:26:21.427 ************************************ 00:26:21.427 15:30:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:21.427 15:30:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:21.427 15:30:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:21.427 15:30:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:21.427 ************************************ 00:26:21.427 START TEST nvmf_multiconnection 00:26:21.427 ************************************ 00:26:21.427 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:21.427 * Looking for test storage... 
00:26:21.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:21.427 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:21.427 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:26:21.427 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:21.427 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:21.427 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:21.427 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:21.427 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:21.427 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:21.427 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:21.427 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:21.427 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:21.427 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:21.687 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:21.687 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:21.687 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:21.687 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:21.687 15:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:21.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.688 --rc genhtml_branch_coverage=1 00:26:21.688 --rc genhtml_function_coverage=1 00:26:21.688 --rc genhtml_legend=1 00:26:21.688 --rc geninfo_all_blocks=1 00:26:21.688 --rc geninfo_unexecuted_blocks=1 00:26:21.688 00:26:21.688 ' 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:21.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.688 --rc genhtml_branch_coverage=1 00:26:21.688 --rc genhtml_function_coverage=1 00:26:21.688 --rc genhtml_legend=1 00:26:21.688 --rc geninfo_all_blocks=1 00:26:21.688 --rc geninfo_unexecuted_blocks=1 00:26:21.688 00:26:21.688 ' 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:21.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.688 --rc genhtml_branch_coverage=1 00:26:21.688 --rc genhtml_function_coverage=1 00:26:21.688 --rc genhtml_legend=1 00:26:21.688 --rc geninfo_all_blocks=1 00:26:21.688 --rc geninfo_unexecuted_blocks=1 00:26:21.688 00:26:21.688 ' 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:21.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.688 --rc genhtml_branch_coverage=1 00:26:21.688 --rc genhtml_function_coverage=1 00:26:21.688 --rc genhtml_legend=1 00:26:21.688 --rc geninfo_all_blocks=1 00:26:21.688 --rc geninfo_unexecuted_blocks=1 00:26:21.688 00:26:21.688 ' 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.688 15:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:21.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:21.688 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:21.689 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:21.689 15:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.264 15:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:28.264 15:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:28.264 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:28.264 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:28.264 Found net devices under 0000:86:00.0: cvl_0_0 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:28.264 Found net devices under 0000:86:00.1: cvl_0_1 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.264 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.265 15:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.265 15:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:28.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:26:28.265 00:26:28.265 --- 10.0.0.2 ping statistics --- 00:26:28.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.265 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:26:28.265 00:26:28.265 --- 10.0.0.1 ping statistics --- 00:26:28.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.265 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3935435 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3935435 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # '[' -z 3935435 ']' 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:28.265 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.265 [2024-11-06 15:30:55.157532] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:28.265 [2024-11-06 15:30:55.157626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.265 [2024-11-06 15:30:55.291631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:28.265 [2024-11-06 15:30:55.398434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.265 [2024-11-06 15:30:55.398481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.265 [2024-11-06 15:30:55.398491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.265 [2024-11-06 15:30:55.398501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.265 [2024-11-06 15:30:55.398510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:28.265 [2024-11-06 15:30:55.400957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.265 [2024-11-06 15:30:55.401048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.265 [2024-11-06 15:30:55.401127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.265 [2024-11-06 15:30:55.401150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.524 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:28.524 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@866 -- # return 0 00:26:28.525 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:28.525 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.525 15:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.525 [2024-11-06 15:30:56.013153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:28.525 15:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.525 Malloc1 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.525 [2024-11-06 15:30:56.140939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.525 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.784 Malloc2 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.784 Malloc3 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.784 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.785 Malloc4 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.785 
15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.785 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.045 Malloc5 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.045 15:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.045 Malloc6 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.045 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.305 Malloc7 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.305 15:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.305 Malloc8 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.305 15:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.305 Malloc9 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.305 15:30:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.565 Malloc10 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.565 Malloc11 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:29.565 
15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.565 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:26:30.943 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:30.943 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:30.943 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:30.943 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:30.943 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:32.849 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:32.849 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:32.849 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK1 00:26:32.849 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:32.849 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:32.849 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:32.849 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.849 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:34.227 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:34.227 15:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:34.227 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:34.227 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:34.227 15:31:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:36.131 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:36.131 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:36.131 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK2 00:26:36.131 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:36.131 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:36.131 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:36.131 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:36.131 15:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:37.068 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:37.068 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:37.327 15:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:37.328 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:37.328 15:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:39.233 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:39.233 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:39.233 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK3 00:26:39.233 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:39.233 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:39.233 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:39.234 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:39.234 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:40.610 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:40.610 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:40.610 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:40.610 
15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:40.610 15:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:42.515 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:42.515 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:42.515 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK4 00:26:42.515 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:42.515 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:42.515 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:42.515 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.515 15:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:43.893 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:43.893 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:43.893 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:43.893 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:43.893 15:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:45.798 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:45.798 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:45.798 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK5 00:26:45.798 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:45.798 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:45.798 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:45.798 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.798 15:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:47.176 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:47.176 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:47.176 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:47.176 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:47.176 15:31:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:49.081 15:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:49.081 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:49.081 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK6 00:26:49.339 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:49.339 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:49.339 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:49.339 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:49.339 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:50.717 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:50.717 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:50.717 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:50.717 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:50.717 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:52.633 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:52.633 15:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:52.633 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK7 00:26:52.633 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:52.633 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:52.633 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:52.633 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.633 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:54.011 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:54.011 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:54.011 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:54.011 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:54.011 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:55.929 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:55.929 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:55.929 15:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK8 00:26:55.929 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:55.929 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:55.929 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:55.929 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.929 15:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:57.384 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:57.384 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:26:57.384 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:26:57.384 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:26:57.384 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:26:59.289 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:26:59.289 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:26:59.289 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK9 00:26:59.289 15:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:26:59.289 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:26:59.289 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:26:59.289 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.289 15:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:00.667 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:00.667 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:00.667 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:00.667 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:00.667 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:03.204 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:03.204 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:03.204 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK10 00:27:03.204 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:03.204 15:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:03.204 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:03.204 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.204 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:04.582 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:04.582 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:04.582 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:04.582 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:04.582 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:06.487 15:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:06.487 15:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:06.487 15:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK11 00:27:06.487 15:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:06.487 15:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:06.487 
15:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:06.487 15:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:06.487 [global] 00:27:06.487 thread=1 00:27:06.487 invalidate=1 00:27:06.487 rw=read 00:27:06.487 time_based=1 00:27:06.487 runtime=10 00:27:06.487 ioengine=libaio 00:27:06.487 direct=1 00:27:06.487 bs=262144 00:27:06.487 iodepth=64 00:27:06.487 norandommap=1 00:27:06.487 numjobs=1 00:27:06.487 00:27:06.487 [job0] 00:27:06.487 filename=/dev/nvme0n1 00:27:06.487 [job1] 00:27:06.487 filename=/dev/nvme10n1 00:27:06.487 [job2] 00:27:06.487 filename=/dev/nvme1n1 00:27:06.487 [job3] 00:27:06.487 filename=/dev/nvme2n1 00:27:06.487 [job4] 00:27:06.487 filename=/dev/nvme3n1 00:27:06.487 [job5] 00:27:06.487 filename=/dev/nvme4n1 00:27:06.487 [job6] 00:27:06.487 filename=/dev/nvme5n1 00:27:06.487 [job7] 00:27:06.487 filename=/dev/nvme6n1 00:27:06.487 [job8] 00:27:06.487 filename=/dev/nvme7n1 00:27:06.487 [job9] 00:27:06.487 filename=/dev/nvme8n1 00:27:06.487 [job10] 00:27:06.487 filename=/dev/nvme9n1 00:27:06.487 Could not set queue depth (nvme0n1) 00:27:06.487 Could not set queue depth (nvme10n1) 00:27:06.487 Could not set queue depth (nvme1n1) 00:27:06.487 Could not set queue depth (nvme2n1) 00:27:06.487 Could not set queue depth (nvme3n1) 00:27:06.487 Could not set queue depth (nvme4n1) 00:27:06.487 Could not set queue depth (nvme5n1) 00:27:06.487 Could not set queue depth (nvme6n1) 00:27:06.487 Could not set queue depth (nvme7n1) 00:27:06.487 Could not set queue depth (nvme8n1) 00:27:06.487 Could not set queue depth (nvme9n1) 00:27:06.746 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.746 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:27:06.746 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.746 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.746 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.746 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.746 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.746 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.746 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.746 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.746 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:06.746 fio-3.35 00:27:06.746 Starting 11 threads 00:27:18.955 00:27:18.955 job0: (groupid=0, jobs=1): err= 0: pid=3942151: Wed Nov 6 15:31:44 2024 00:27:18.955 read: IOPS=200, BW=50.2MiB/s (52.6MB/s)(510MiB/10163msec) 00:27:18.955 slat (usec): min=11, max=252986, avg=3864.70, stdev=18429.98 00:27:18.955 clat (usec): min=1237, max=897728, avg=314468.47, stdev=234335.03 00:27:18.955 lat (usec): min=1264, max=897767, avg=318333.16, stdev=236676.87 00:27:18.955 clat percentiles (usec): 00:27:18.955 | 1.00th=[ 1713], 5.00th=[ 2180], 10.00th=[ 16057], 20.00th=[ 99091], 00:27:18.955 | 30.00th=[154141], 40.00th=[227541], 50.00th=[261096], 60.00th=[341836], 00:27:18.955 | 70.00th=[446694], 80.00th=[557843], 90.00th=[658506], 95.00th=[734004], 00:27:18.955 | 99.00th=[851444], 99.50th=[884999], 99.90th=[901776], 99.95th=[901776], 00:27:18.955 | 
99.99th=[901776] 00:27:18.955 bw ( KiB/s): min=16384, max=133632, per=6.27%, avg=50632.25, stdev=36977.85, samples=20 00:27:18.955 iops : min= 64, max= 522, avg=197.75, stdev=144.45, samples=20 00:27:18.955 lat (msec) : 2=4.41%, 4=4.31%, 10=0.73%, 20=7.25%, 50=1.18% 00:27:18.955 lat (msec) : 100=2.35%, 250=25.33%, 500=30.77%, 750=20.04%, 1000=3.63% 00:27:18.955 cpu : usr=0.07%, sys=0.68%, ctx=509, majf=0, minf=4098 00:27:18.955 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:27:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.955 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.955 issued rwts: total=2041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.955 job1: (groupid=0, jobs=1): err= 0: pid=3942169: Wed Nov 6 15:31:44 2024 00:27:18.955 read: IOPS=364, BW=91.2MiB/s (95.6MB/s)(926MiB/10152msec) 00:27:18.955 slat (usec): min=8, max=218500, avg=1614.64, stdev=10182.70 00:27:18.955 clat (usec): min=1652, max=875366, avg=173662.81, stdev=187563.68 00:27:18.955 lat (usec): min=1681, max=875398, avg=175277.45, stdev=189624.52 00:27:18.955 clat percentiles (msec): 00:27:18.955 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 10], 20.00th=[ 18], 00:27:18.955 | 30.00th=[ 28], 40.00th=[ 37], 50.00th=[ 122], 60.00th=[ 180], 00:27:18.955 | 70.00th=[ 228], 80.00th=[ 288], 90.00th=[ 468], 95.00th=[ 592], 00:27:18.955 | 99.00th=[ 726], 99.50th=[ 802], 99.90th=[ 860], 99.95th=[ 877], 00:27:18.955 | 99.99th=[ 877] 00:27:18.955 bw ( KiB/s): min=28672, max=337408, per=11.54%, avg=93183.90, stdev=77123.65, samples=20 00:27:18.955 iops : min= 112, max= 1318, avg=363.95, stdev=301.31, samples=20 00:27:18.955 lat (msec) : 2=0.05%, 4=1.40%, 10=11.34%, 20=9.37%, 50=20.69% 00:27:18.955 lat (msec) : 100=4.05%, 250=26.47%, 500=18.39%, 750=7.26%, 1000=0.97% 00:27:18.955 cpu : usr=0.12%, sys=1.33%, ctx=1015, majf=0, minf=4097 
00:27:18.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.955 issued rwts: total=3703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.955 job2: (groupid=0, jobs=1): err= 0: pid=3942180: Wed Nov 6 15:31:44 2024 00:27:18.955 read: IOPS=277, BW=69.4MiB/s (72.8MB/s)(700MiB/10083msec) 00:27:18.955 slat (usec): min=10, max=457033, avg=2836.95, stdev=17081.18 00:27:18.955 clat (usec): min=1770, max=1043.1k, avg=227493.28, stdev=232453.98 00:27:18.955 lat (usec): min=1814, max=1152.2k, avg=230330.23, stdev=235098.64 00:27:18.955 clat percentiles (msec): 00:27:18.955 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 12], 00:27:18.955 | 30.00th=[ 41], 40.00th=[ 64], 50.00th=[ 169], 60.00th=[ 226], 00:27:18.955 | 70.00th=[ 351], 80.00th=[ 464], 90.00th=[ 558], 95.00th=[ 642], 00:27:18.955 | 99.00th=[ 1020], 99.50th=[ 1036], 99.90th=[ 1045], 99.95th=[ 1045], 00:27:18.955 | 99.99th=[ 1045] 00:27:18.955 bw ( KiB/s): min=26112, max=383488, per=8.68%, avg=70037.20, stdev=81587.76, samples=20 00:27:18.955 iops : min= 102, max= 1498, avg=273.55, stdev=318.71, samples=20 00:27:18.955 lat (msec) : 2=0.18%, 4=3.68%, 10=15.08%, 20=6.32%, 50=10.40% 00:27:18.955 lat (msec) : 100=8.90%, 250=18.72%, 500=20.40%, 750=14.76%, 1000=0.29% 00:27:18.955 lat (msec) : 2000=1.29% 00:27:18.955 cpu : usr=0.11%, sys=1.10%, ctx=631, majf=0, minf=4097 00:27:18.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:27:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.955 issued rwts: total=2799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.955 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:27:18.955 job3: (groupid=0, jobs=1): err= 0: pid=3942189: Wed Nov 6 15:31:44 2024 00:27:18.955 read: IOPS=229, BW=57.5MiB/s (60.3MB/s)(580MiB/10084msec) 00:27:18.955 slat (usec): min=15, max=145682, avg=2974.41, stdev=12845.68 00:27:18.955 clat (msec): min=17, max=798, avg=275.05, stdev=186.48 00:27:18.955 lat (msec): min=17, max=849, avg=278.02, stdev=188.46 00:27:18.955 clat percentiles (msec): 00:27:18.955 | 1.00th=[ 25], 5.00th=[ 40], 10.00th=[ 81], 20.00th=[ 100], 00:27:18.955 | 30.00th=[ 167], 40.00th=[ 197], 50.00th=[ 220], 60.00th=[ 251], 00:27:18.955 | 70.00th=[ 347], 80.00th=[ 451], 90.00th=[ 584], 95.00th=[ 659], 00:27:18.955 | 99.00th=[ 743], 99.50th=[ 751], 99.90th=[ 802], 99.95th=[ 802], 00:27:18.955 | 99.99th=[ 802] 00:27:18.955 bw ( KiB/s): min=22528, max=132608, per=7.15%, avg=57753.30, stdev=34694.86, samples=20 00:27:18.955 iops : min= 88, max= 518, avg=225.55, stdev=135.51, samples=20 00:27:18.955 lat (msec) : 20=0.17%, 50=5.69%, 100=14.53%, 250=39.50%, 500=24.15% 00:27:18.955 lat (msec) : 750=15.70%, 1000=0.26% 00:27:18.955 cpu : usr=0.06%, sys=0.95%, ctx=402, majf=0, minf=4097 00:27:18.955 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:27:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.955 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.955 job4: (groupid=0, jobs=1): err= 0: pid=3942195: Wed Nov 6 15:31:44 2024 00:27:18.955 read: IOPS=220, BW=55.0MiB/s (57.7MB/s)(559MiB/10163msec) 00:27:18.955 slat (usec): min=15, max=235379, avg=3320.96, stdev=17373.37 00:27:18.955 clat (msec): min=11, max=927, avg=287.25, stdev=230.84 00:27:18.955 lat (msec): min=16, max=927, avg=290.57, stdev=233.07 00:27:18.955 clat percentiles (msec): 00:27:18.955 | 1.00th=[ 24], 5.00th=[ 31], 
10.00th=[ 50], 20.00th=[ 87], 00:27:18.955 | 30.00th=[ 102], 40.00th=[ 130], 50.00th=[ 203], 60.00th=[ 271], 00:27:18.955 | 70.00th=[ 430], 80.00th=[ 567], 90.00th=[ 625], 95.00th=[ 684], 00:27:18.955 | 99.00th=[ 810], 99.50th=[ 877], 99.90th=[ 927], 99.95th=[ 927], 00:27:18.955 | 99.99th=[ 927] 00:27:18.955 bw ( KiB/s): min=15872, max=192000, per=6.89%, avg=55594.50, stdev=52556.77, samples=20 00:27:18.955 iops : min= 62, max= 750, avg=217.15, stdev=205.29, samples=20 00:27:18.955 lat (msec) : 20=0.36%, 50=10.11%, 100=19.01%, 250=29.87%, 500=15.74% 00:27:18.955 lat (msec) : 750=22.85%, 1000=2.06% 00:27:18.955 cpu : usr=0.07%, sys=0.76%, ctx=366, majf=0, minf=4097 00:27:18.955 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:27:18.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.955 issued rwts: total=2236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.955 job5: (groupid=0, jobs=1): err= 0: pid=3942216: Wed Nov 6 15:31:44 2024 00:27:18.955 read: IOPS=367, BW=91.8MiB/s (96.2MB/s)(932MiB/10160msec) 00:27:18.955 slat (usec): min=9, max=274749, avg=1726.86, stdev=10008.52 00:27:18.955 clat (usec): min=1336, max=754400, avg=172462.28, stdev=154130.93 00:27:18.955 lat (usec): min=1878, max=754429, avg=174189.14, stdev=155529.68 00:27:18.955 clat percentiles (msec): 00:27:18.955 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 32], 20.00th=[ 39], 00:27:18.955 | 30.00th=[ 59], 40.00th=[ 82], 50.00th=[ 124], 60.00th=[ 180], 00:27:18.955 | 70.00th=[ 230], 80.00th=[ 279], 90.00th=[ 422], 95.00th=[ 518], 00:27:18.955 | 99.00th=[ 609], 99.50th=[ 642], 99.90th=[ 667], 99.95th=[ 718], 00:27:18.955 | 99.99th=[ 751] 00:27:18.956 bw ( KiB/s): min=27648, max=318976, per=11.62%, avg=93816.80, stdev=70087.05, samples=20 00:27:18.956 iops : min= 108, max= 1246, avg=366.45, 
stdev=273.78, samples=20 00:27:18.956 lat (msec) : 2=0.11%, 4=1.10%, 10=2.92%, 20=2.09%, 50=19.52% 00:27:18.956 lat (msec) : 100=17.99%, 250=30.79%, 500=19.12%, 750=6.33%, 1000=0.03% 00:27:18.956 cpu : usr=0.14%, sys=1.17%, ctx=992, majf=0, minf=4097 00:27:18.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:18.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.956 issued rwts: total=3729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.956 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.956 job6: (groupid=0, jobs=1): err= 0: pid=3942226: Wed Nov 6 15:31:44 2024 00:27:18.956 read: IOPS=488, BW=122MiB/s (128MB/s)(1231MiB/10089msec) 00:27:18.956 slat (usec): min=11, max=444004, avg=1475.70, stdev=10523.90 00:27:18.956 clat (usec): min=1279, max=917150, avg=129535.51, stdev=133269.16 00:27:18.956 lat (usec): min=1319, max=917182, avg=131011.21, stdev=134909.39 00:27:18.956 clat percentiles (msec): 00:27:18.956 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 32], 20.00th=[ 40], 00:27:18.956 | 30.00th=[ 53], 40.00th=[ 69], 50.00th=[ 83], 60.00th=[ 97], 00:27:18.956 | 70.00th=[ 127], 80.00th=[ 192], 90.00th=[ 317], 95.00th=[ 443], 00:27:18.956 | 99.00th=[ 634], 99.50th=[ 718], 99.90th=[ 785], 99.95th=[ 860], 00:27:18.956 | 99.99th=[ 919] 00:27:18.956 bw ( KiB/s): min=10240, max=302080, per=15.41%, avg=124412.40, stdev=80940.56, samples=20 00:27:18.956 iops : min= 40, max= 1180, avg=485.95, stdev=316.22, samples=20 00:27:18.956 lat (msec) : 2=0.18%, 4=0.57%, 10=1.83%, 20=1.79%, 50=22.89% 00:27:18.956 lat (msec) : 100=33.96%, 250=24.96%, 500=11.05%, 750=2.58%, 1000=0.20% 00:27:18.956 cpu : usr=0.12%, sys=1.76%, ctx=1325, majf=0, minf=4097 00:27:18.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:27:18.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:27:18.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.956 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.956 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.956 job7: (groupid=0, jobs=1): err= 0: pid=3942239: Wed Nov 6 15:31:44 2024 00:27:18.956 read: IOPS=373, BW=93.3MiB/s (97.8MB/s)(951MiB/10199msec) 00:27:18.956 slat (usec): min=9, max=353295, avg=1798.50, stdev=12569.84 00:27:18.956 clat (usec): min=961, max=1013.9k, avg=169576.34, stdev=201451.02 00:27:18.956 lat (usec): min=1010, max=1014.0k, avg=171374.84, stdev=203299.96 00:27:18.956 clat percentiles (msec): 00:27:18.956 | 1.00th=[ 15], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 32], 00:27:18.956 | 30.00th=[ 36], 40.00th=[ 42], 50.00th=[ 80], 60.00th=[ 124], 00:27:18.956 | 70.00th=[ 201], 80.00th=[ 262], 90.00th=[ 460], 95.00th=[ 693], 00:27:18.956 | 99.00th=[ 844], 99.50th=[ 927], 99.90th=[ 1011], 99.95th=[ 1011], 00:27:18.956 | 99.99th=[ 1011] 00:27:18.956 bw ( KiB/s): min=17920, max=453120, per=11.86%, avg=95765.05, stdev=115495.37, samples=20 00:27:18.956 iops : min= 70, max= 1770, avg=374.05, stdev=451.17, samples=20 00:27:18.956 lat (usec) : 1000=0.03% 00:27:18.956 lat (msec) : 2=0.29%, 10=0.21%, 20=0.79%, 50=41.18%, 100=13.35% 00:27:18.956 lat (msec) : 250=22.47%, 500=13.56%, 750=4.97%, 1000=2.94%, 2000=0.21% 00:27:18.956 cpu : usr=0.09%, sys=1.42%, ctx=561, majf=0, minf=4097 00:27:18.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:27:18.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.956 issued rwts: total=3805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.956 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.956 job8: (groupid=0, jobs=1): err= 0: pid=3942265: Wed Nov 6 15:31:44 2024 00:27:18.956 read: IOPS=180, BW=45.2MiB/s 
(47.4MB/s)(460MiB/10168msec) 00:27:18.956 slat (usec): min=15, max=255500, avg=3362.66, stdev=17523.01 00:27:18.956 clat (msec): min=4, max=890, avg=349.97, stdev=219.71 00:27:18.956 lat (msec): min=4, max=900, avg=353.34, stdev=221.63 00:27:18.956 clat percentiles (msec): 00:27:18.956 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 48], 20.00th=[ 130], 00:27:18.956 | 30.00th=[ 220], 40.00th=[ 275], 50.00th=[ 338], 60.00th=[ 409], 00:27:18.956 | 70.00th=[ 472], 80.00th=[ 558], 90.00th=[ 659], 95.00th=[ 726], 00:27:18.956 | 99.00th=[ 818], 99.50th=[ 860], 99.90th=[ 894], 99.95th=[ 894], 00:27:18.956 | 99.99th=[ 894] 00:27:18.956 bw ( KiB/s): min=19456, max=129536, per=5.63%, avg=45462.20, stdev=29740.86, samples=20 00:27:18.956 iops : min= 76, max= 506, avg=177.55, stdev=116.19, samples=20 00:27:18.956 lat (msec) : 10=2.77%, 20=2.39%, 50=5.43%, 100=5.92%, 250=19.08% 00:27:18.956 lat (msec) : 500=37.66%, 750=23.53%, 1000=3.21% 00:27:18.956 cpu : usr=0.06%, sys=0.69%, ctx=438, majf=0, minf=4097 00:27:18.956 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:27:18.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.956 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.956 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.956 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.956 job9: (groupid=0, jobs=1): err= 0: pid=3942276: Wed Nov 6 15:31:44 2024 00:27:18.956 read: IOPS=242, BW=60.7MiB/s (63.6MB/s)(616MiB/10158msec) 00:27:18.956 slat (usec): min=16, max=567486, avg=2777.75, stdev=19232.30 00:27:18.956 clat (msec): min=3, max=1019, avg=260.63, stdev=240.46 00:27:18.956 lat (msec): min=3, max=1019, avg=263.41, stdev=242.85 00:27:18.956 clat percentiles (msec): 00:27:18.956 | 1.00th=[ 13], 5.00th=[ 17], 10.00th=[ 20], 20.00th=[ 41], 00:27:18.956 | 30.00th=[ 50], 40.00th=[ 109], 50.00th=[ 165], 60.00th=[ 292], 00:27:18.956 | 70.00th=[ 384], 80.00th=[ 
518], 90.00th=[ 625], 95.00th=[ 667], 00:27:18.956 | 99.00th=[ 944], 99.50th=[ 961], 99.90th=[ 961], 99.95th=[ 1020], 00:27:18.956 | 99.99th=[ 1020] 00:27:18.956 bw ( KiB/s): min=13312, max=211968, per=7.61%, avg=61475.95, stdev=53411.27, samples=20 00:27:18.956 iops : min= 52, max= 828, avg=240.10, stdev=208.57, samples=20 00:27:18.956 lat (msec) : 4=0.08%, 10=0.20%, 20=9.98%, 50=19.84%, 100=8.76% 00:27:18.956 lat (msec) : 250=17.24%, 500=22.76%, 750=18.82%, 1000=2.23%, 2000=0.08% 00:27:18.956 cpu : usr=0.13%, sys=0.85%, ctx=554, majf=0, minf=3722 00:27:18.956 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.4% 00:27:18.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.956 issued rwts: total=2465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.956 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.956 job10: (groupid=0, jobs=1): err= 0: pid=3942284: Wed Nov 6 15:31:44 2024 00:27:18.956 read: IOPS=228, BW=57.1MiB/s (59.8MB/s)(576MiB/10087msec) 00:27:18.956 slat (usec): min=10, max=295249, avg=2724.48, stdev=14623.23 00:27:18.956 clat (usec): min=1813, max=768744, avg=277432.90, stdev=183983.93 00:27:18.956 lat (usec): min=1840, max=768782, avg=280157.37, stdev=185336.58 00:27:18.956 clat percentiles (msec): 00:27:18.956 | 1.00th=[ 6], 5.00th=[ 28], 10.00th=[ 51], 20.00th=[ 106], 00:27:18.956 | 30.00th=[ 161], 40.00th=[ 205], 50.00th=[ 243], 60.00th=[ 275], 00:27:18.956 | 70.00th=[ 359], 80.00th=[ 464], 90.00th=[ 550], 95.00th=[ 625], 00:27:18.956 | 99.00th=[ 693], 99.50th=[ 718], 99.90th=[ 768], 99.95th=[ 768], 00:27:18.956 | 99.99th=[ 768] 00:27:18.956 bw ( KiB/s): min=25088, max=150528, per=7.10%, avg=57288.95, stdev=32280.35, samples=20 00:27:18.956 iops : min= 98, max= 588, avg=223.75, stdev=126.12, samples=20 00:27:18.956 lat (msec) : 2=0.09%, 4=0.13%, 10=1.56%, 20=0.09%, 50=7.99% 00:27:18.956 lat (msec) : 
100=9.56%, 250=32.84%, 500=32.45%, 750=15.16%, 1000=0.13% 00:27:18.956 cpu : usr=0.08%, sys=0.76%, ctx=444, majf=0, minf=4097 00:27:18.956 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:27:18.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:18.956 issued rwts: total=2302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.956 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:18.956 00:27:18.956 Run status group 0 (all jobs): 00:27:18.956 READ: bw=788MiB/s (827MB/s), 45.2MiB/s-122MiB/s (47.4MB/s-128MB/s), io=8041MiB (8431MB), run=10083-10199msec 00:27:18.956 00:27:18.956 Disk stats (read/write): 00:27:18.956 nvme0n1: ios=3931/0, merge=0/0, ticks=1198040/0, in_queue=1198040, util=97.12% 00:27:18.956 nvme10n1: ios=7255/0, merge=0/0, ticks=1204105/0, in_queue=1204105, util=97.37% 00:27:18.956 nvme1n1: ios=5441/0, merge=0/0, ticks=1234284/0, in_queue=1234284, util=97.67% 00:27:18.956 nvme2n1: ios=4464/0, merge=0/0, ticks=1228539/0, in_queue=1228539, util=97.79% 00:27:18.956 nvme3n1: ios=4344/0, merge=0/0, ticks=1200263/0, in_queue=1200263, util=97.89% 00:27:18.956 nvme4n1: ios=7289/0, merge=0/0, ticks=1224395/0, in_queue=1224395, util=98.25% 00:27:18.956 nvme5n1: ios=9660/0, merge=0/0, ticks=1235134/0, in_queue=1235134, util=98.39% 00:27:18.956 nvme6n1: ios=7609/0, merge=0/0, ticks=1275644/0, in_queue=1275644, util=98.54% 00:27:18.956 nvme7n1: ios=3546/0, merge=0/0, ticks=1209546/0, in_queue=1209546, util=98.92% 00:27:18.956 nvme8n1: ios=4803/0, merge=0/0, ticks=1197830/0, in_queue=1197830, util=99.13% 00:27:18.956 nvme9n1: ios=4443/0, merge=0/0, ticks=1239148/0, in_queue=1239148, util=99.24% 00:27:18.956 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 
00:27:18.956 [global] 00:27:18.956 thread=1 00:27:18.956 invalidate=1 00:27:18.956 rw=randwrite 00:27:18.956 time_based=1 00:27:18.956 runtime=10 00:27:18.956 ioengine=libaio 00:27:18.956 direct=1 00:27:18.956 bs=262144 00:27:18.956 iodepth=64 00:27:18.956 norandommap=1 00:27:18.956 numjobs=1 00:27:18.956 00:27:18.956 [job0] 00:27:18.956 filename=/dev/nvme0n1 00:27:18.956 [job1] 00:27:18.956 filename=/dev/nvme10n1 00:27:18.956 [job2] 00:27:18.956 filename=/dev/nvme1n1 00:27:18.956 [job3] 00:27:18.956 filename=/dev/nvme2n1 00:27:18.956 [job4] 00:27:18.956 filename=/dev/nvme3n1 00:27:18.956 [job5] 00:27:18.956 filename=/dev/nvme4n1 00:27:18.956 [job6] 00:27:18.956 filename=/dev/nvme5n1 00:27:18.956 [job7] 00:27:18.956 filename=/dev/nvme6n1 00:27:18.956 [job8] 00:27:18.957 filename=/dev/nvme7n1 00:27:18.957 [job9] 00:27:18.957 filename=/dev/nvme8n1 00:27:18.957 [job10] 00:27:18.957 filename=/dev/nvme9n1 00:27:18.957 Could not set queue depth (nvme0n1) 00:27:18.957 Could not set queue depth (nvme10n1) 00:27:18.957 Could not set queue depth (nvme1n1) 00:27:18.957 Could not set queue depth (nvme2n1) 00:27:18.957 Could not set queue depth (nvme3n1) 00:27:18.957 Could not set queue depth (nvme4n1) 00:27:18.957 Could not set queue depth (nvme5n1) 00:27:18.957 Could not set queue depth (nvme6n1) 00:27:18.957 Could not set queue depth (nvme7n1) 00:27:18.957 Could not set queue depth (nvme8n1) 00:27:18.957 Could not set queue depth (nvme9n1) 00:27:18.957 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.957 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.957 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.957 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.957 job4: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.957 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.957 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.957 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.957 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.957 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.957 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:18.957 fio-3.35 00:27:18.957 Starting 11 threads 00:27:28.933 00:27:28.933 job0: (groupid=0, jobs=1): err= 0: pid=3943275: Wed Nov 6 15:31:55 2024 00:27:28.933 write: IOPS=267, BW=66.8MiB/s (70.1MB/s)(680MiB/10178msec); 0 zone resets 00:27:28.933 slat (usec): min=20, max=114710, avg=2918.11, stdev=7441.11 00:27:28.933 clat (usec): min=879, max=573233, avg=236444.65, stdev=129266.95 00:27:28.933 lat (usec): min=1030, max=573297, avg=239362.76, stdev=130908.35 00:27:28.933 clat percentiles (msec): 00:27:28.933 | 1.00th=[ 5], 5.00th=[ 44], 10.00th=[ 84], 20.00th=[ 103], 00:27:28.933 | 30.00th=[ 142], 40.00th=[ 192], 50.00th=[ 241], 60.00th=[ 275], 00:27:28.933 | 70.00th=[ 321], 80.00th=[ 351], 90.00th=[ 409], 95.00th=[ 464], 00:27:28.933 | 99.00th=[ 510], 99.50th=[ 542], 99.90th=[ 567], 99.95th=[ 567], 00:27:28.933 | 99.99th=[ 575] 00:27:28.933 bw ( KiB/s): min=28160, max=147751, per=7.34%, avg=68008.35, stdev=35027.44, samples=20 00:27:28.933 iops : min= 110, max= 577, avg=265.65, stdev=136.81, samples=20 00:27:28.933 lat (usec) : 1000=0.04% 00:27:28.933 lat (msec) : 2=0.40%, 4=0.37%, 10=1.54%, 20=0.66%, 
50=2.72% 00:27:28.933 lat (msec) : 100=13.64%, 250=32.06%, 500=47.24%, 750=1.32% 00:27:28.933 cpu : usr=0.47%, sys=1.11%, ctx=1210, majf=0, minf=1 00:27:28.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:27:28.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.933 issued rwts: total=0,2720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.933 job1: (groupid=0, jobs=1): err= 0: pid=3943303: Wed Nov 6 15:31:55 2024 00:27:28.933 write: IOPS=263, BW=65.8MiB/s (69.0MB/s)(670MiB/10175msec); 0 zone resets 00:27:28.933 slat (usec): min=26, max=136826, avg=3018.51, stdev=7457.50 00:27:28.933 clat (usec): min=1383, max=463539, avg=239831.61, stdev=111699.99 00:27:28.933 lat (usec): min=1446, max=463579, avg=242850.13, stdev=112978.93 00:27:28.933 clat percentiles (msec): 00:27:28.933 | 1.00th=[ 6], 5.00th=[ 38], 10.00th=[ 91], 20.00th=[ 136], 00:27:28.933 | 30.00th=[ 167], 40.00th=[ 207], 50.00th=[ 249], 60.00th=[ 284], 00:27:28.933 | 70.00th=[ 317], 80.00th=[ 347], 90.00th=[ 376], 95.00th=[ 414], 00:27:28.933 | 99.00th=[ 447], 99.50th=[ 451], 99.90th=[ 460], 99.95th=[ 464], 00:27:28.933 | 99.99th=[ 464] 00:27:28.933 bw ( KiB/s): min=39424, max=112128, per=7.23%, avg=67000.95, stdev=22825.69, samples=20 00:27:28.933 iops : min= 154, max= 438, avg=261.70, stdev=89.17, samples=20 00:27:28.933 lat (msec) : 2=0.19%, 4=0.56%, 10=1.49%, 20=1.83%, 50=1.68% 00:27:28.933 lat (msec) : 100=5.34%, 250=39.25%, 500=49.66% 00:27:28.933 cpu : usr=0.67%, sys=0.81%, ctx=1222, majf=0, minf=1 00:27:28.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:27:28.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.933 issued rwts: 
total=0,2680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.933 job2: (groupid=0, jobs=1): err= 0: pid=3943323: Wed Nov 6 15:31:55 2024 00:27:28.933 write: IOPS=281, BW=70.4MiB/s (73.8MB/s)(717MiB/10187msec); 0 zone resets 00:27:28.933 slat (usec): min=23, max=225742, avg=2910.11, stdev=8176.82 00:27:28.933 clat (msec): min=7, max=500, avg=224.23, stdev=126.81 00:27:28.933 lat (msec): min=7, max=500, avg=227.14, stdev=128.59 00:27:28.933 clat percentiles (msec): 00:27:28.933 | 1.00th=[ 20], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 93], 00:27:28.933 | 30.00th=[ 120], 40.00th=[ 182], 50.00th=[ 230], 60.00th=[ 259], 00:27:28.933 | 70.00th=[ 305], 80.00th=[ 347], 90.00th=[ 405], 95.00th=[ 443], 00:27:28.933 | 99.00th=[ 489], 99.50th=[ 493], 99.90th=[ 502], 99.95th=[ 502], 00:27:28.933 | 99.99th=[ 502] 00:27:28.933 bw ( KiB/s): min=30781, max=184832, per=7.75%, avg=71785.45, stdev=41248.68, samples=20 00:27:28.933 iops : min= 120, max= 722, avg=280.40, stdev=161.14, samples=20 00:27:28.933 lat (msec) : 10=0.07%, 20=0.98%, 50=4.15%, 100=20.12%, 250=31.10% 00:27:28.933 lat (msec) : 500=43.55%, 750=0.03% 00:27:28.933 cpu : usr=0.61%, sys=1.00%, ctx=1214, majf=0, minf=1 00:27:28.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:27:28.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.934 issued rwts: total=0,2868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.934 job3: (groupid=0, jobs=1): err= 0: pid=3943334: Wed Nov 6 15:31:55 2024 00:27:28.934 write: IOPS=428, BW=107MiB/s (112MB/s)(1092MiB/10183msec); 0 zone resets 00:27:28.934 slat (usec): min=22, max=82075, avg=1563.98, stdev=4858.22 00:27:28.934 clat (usec): min=888, max=514557, avg=147560.01, stdev=107997.52 00:27:28.934 lat (usec): 
min=951, max=519007, avg=149123.99, stdev=109203.77 00:27:28.934 clat percentiles (msec): 00:27:28.934 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 24], 20.00th=[ 37], 00:27:28.934 | 30.00th=[ 72], 40.00th=[ 113], 50.00th=[ 140], 60.00th=[ 161], 00:27:28.934 | 70.00th=[ 184], 80.00th=[ 232], 90.00th=[ 296], 95.00th=[ 363], 00:27:28.934 | 99.00th=[ 464], 99.50th=[ 477], 99.90th=[ 502], 99.95th=[ 506], 00:27:28.934 | 99.99th=[ 514] 00:27:28.934 bw ( KiB/s): min=44544, max=267264, per=11.90%, avg=110214.50, stdev=57782.26, samples=20 00:27:28.934 iops : min= 174, max= 1044, avg=430.50, stdev=225.73, samples=20 00:27:28.934 lat (usec) : 1000=0.02% 00:27:28.934 lat (msec) : 2=0.39%, 4=0.57%, 10=2.61%, 20=5.20%, 50=14.79% 00:27:28.934 lat (msec) : 100=13.32%, 250=46.34%, 500=16.64%, 750=0.11% 00:27:28.934 cpu : usr=0.90%, sys=1.28%, ctx=2658, majf=0, minf=1 00:27:28.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:28.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.934 issued rwts: total=0,4368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.934 job4: (groupid=0, jobs=1): err= 0: pid=3943340: Wed Nov 6 15:31:55 2024 00:27:28.934 write: IOPS=305, BW=76.4MiB/s (80.1MB/s)(777MiB/10177msec); 0 zone resets 00:27:28.934 slat (usec): min=23, max=181246, avg=2172.80, stdev=6919.88 00:27:28.934 clat (usec): min=781, max=535921, avg=207214.78, stdev=128208.07 00:27:28.934 lat (usec): min=836, max=535971, avg=209387.57, stdev=129735.79 00:27:28.934 clat percentiles (msec): 00:27:28.934 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 40], 20.00th=[ 93], 00:27:28.934 | 30.00th=[ 134], 40.00th=[ 176], 50.00th=[ 197], 60.00th=[ 222], 00:27:28.934 | 70.00th=[ 259], 80.00th=[ 300], 90.00th=[ 418], 95.00th=[ 451], 00:27:28.934 | 99.00th=[ 493], 99.50th=[ 506], 99.90th=[ 531], 
99.95th=[ 535], 00:27:28.934 | 99.99th=[ 535] 00:27:28.934 bw ( KiB/s): min=34816, max=209408, per=8.42%, avg=77981.45, stdev=42112.83, samples=20 00:27:28.934 iops : min= 136, max= 818, avg=304.60, stdev=164.52, samples=20 00:27:28.934 lat (usec) : 1000=0.10% 00:27:28.934 lat (msec) : 2=0.61%, 4=0.84%, 10=2.22%, 20=3.18%, 50=6.14% 00:27:28.934 lat (msec) : 100=7.69%, 250=46.67%, 500=31.94%, 750=0.61% 00:27:28.934 cpu : usr=0.69%, sys=1.02%, ctx=1851, majf=0, minf=1 00:27:28.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:28.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.934 issued rwts: total=0,3109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.934 job5: (groupid=0, jobs=1): err= 0: pid=3943352: Wed Nov 6 15:31:55 2024 00:27:28.934 write: IOPS=346, BW=86.7MiB/s (90.9MB/s)(880MiB/10148msec); 0 zone resets 00:27:28.934 slat (usec): min=17, max=29864, avg=2508.98, stdev=5574.06 00:27:28.934 clat (msec): min=3, max=482, avg=181.91, stdev=100.90 00:27:28.934 lat (msec): min=3, max=487, avg=184.42, stdev=102.21 00:27:28.934 clat percentiles (msec): 00:27:28.934 | 1.00th=[ 40], 5.00th=[ 56], 10.00th=[ 65], 20.00th=[ 97], 00:27:28.934 | 30.00th=[ 114], 40.00th=[ 131], 50.00th=[ 157], 60.00th=[ 192], 00:27:28.934 | 70.00th=[ 222], 80.00th=[ 284], 90.00th=[ 338], 95.00th=[ 359], 00:27:28.934 | 99.00th=[ 460], 99.50th=[ 468], 99.90th=[ 481], 99.95th=[ 481], 00:27:28.934 | 99.99th=[ 481] 00:27:28.934 bw ( KiB/s): min=43008, max=201216, per=9.55%, avg=88492.10, stdev=43493.47, samples=20 00:27:28.934 iops : min= 168, max= 786, avg=345.65, stdev=169.84, samples=20 00:27:28.934 lat (msec) : 4=0.03%, 10=0.11%, 20=0.26%, 50=1.34%, 100=20.51% 00:27:28.934 lat (msec) : 250=54.03%, 500=23.72% 00:27:28.934 cpu : usr=0.78%, sys=1.21%, ctx=1151, majf=0, minf=1 
00:27:28.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:27:28.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.934 issued rwts: total=0,3520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.934 job6: (groupid=0, jobs=1): err= 0: pid=3943354: Wed Nov 6 15:31:55 2024 00:27:28.934 write: IOPS=357, BW=89.3MiB/s (93.6MB/s)(910MiB/10184msec); 0 zone resets 00:27:28.934 slat (usec): min=25, max=62000, avg=1785.09, stdev=5314.09 00:27:28.934 clat (usec): min=1115, max=531266, avg=177293.09, stdev=110934.81 00:27:28.934 lat (usec): min=1183, max=531306, avg=179078.19, stdev=112397.26 00:27:28.934 clat percentiles (msec): 00:27:28.934 | 1.00th=[ 12], 5.00th=[ 35], 10.00th=[ 52], 20.00th=[ 79], 00:27:28.934 | 30.00th=[ 107], 40.00th=[ 144], 50.00th=[ 161], 60.00th=[ 180], 00:27:28.934 | 70.00th=[ 213], 80.00th=[ 257], 90.00th=[ 334], 95.00th=[ 418], 00:27:28.934 | 99.00th=[ 489], 99.50th=[ 514], 99.90th=[ 523], 99.95th=[ 531], 00:27:28.934 | 99.99th=[ 531] 00:27:28.934 bw ( KiB/s): min=32768, max=195462, per=9.88%, avg=91539.50, stdev=42078.61, samples=20 00:27:28.934 iops : min= 128, max= 763, avg=357.55, stdev=164.30, samples=20 00:27:28.934 lat (msec) : 2=0.19%, 4=0.03%, 10=0.52%, 20=1.48%, 50=7.01% 00:27:28.934 lat (msec) : 100=18.03%, 250=50.03%, 500=21.91%, 750=0.80% 00:27:28.934 cpu : usr=0.81%, sys=1.21%, ctx=2181, majf=0, minf=2 00:27:28.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:28.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.934 issued rwts: total=0,3638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.934 job7: 
(groupid=0, jobs=1): err= 0: pid=3943355: Wed Nov 6 15:31:55 2024 00:27:28.934 write: IOPS=266, BW=66.6MiB/s (69.8MB/s)(678MiB/10178msec); 0 zone resets 00:27:28.934 slat (usec): min=21, max=158712, avg=2262.06, stdev=7246.47 00:27:28.934 clat (usec): min=1023, max=497462, avg=237983.67, stdev=127303.29 00:27:28.934 lat (usec): min=1072, max=502790, avg=240245.73, stdev=129010.63 00:27:28.934 clat percentiles (msec): 00:27:28.934 | 1.00th=[ 4], 5.00th=[ 31], 10.00th=[ 59], 20.00th=[ 104], 00:27:28.934 | 30.00th=[ 163], 40.00th=[ 205], 50.00th=[ 245], 60.00th=[ 275], 00:27:28.934 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 414], 95.00th=[ 443], 00:27:28.934 | 99.00th=[ 477], 99.50th=[ 485], 99.90th=[ 498], 99.95th=[ 498], 00:27:28.934 | 99.99th=[ 498] 00:27:28.934 bw ( KiB/s): min=34885, max=129536, per=7.31%, avg=67741.05, stdev=23187.41, samples=20 00:27:28.934 iops : min= 136, max= 506, avg=264.60, stdev=90.60, samples=20 00:27:28.934 lat (msec) : 2=0.44%, 4=0.74%, 10=1.03%, 20=0.85%, 50=5.09% 00:27:28.934 lat (msec) : 100=10.70%, 250=32.51%, 500=48.63% 00:27:28.934 cpu : usr=0.66%, sys=0.89%, ctx=1747, majf=0, minf=1 00:27:28.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:27:28.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.934 issued rwts: total=0,2710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.934 job8: (groupid=0, jobs=1): err= 0: pid=3943356: Wed Nov 6 15:31:55 2024 00:27:28.934 write: IOPS=352, BW=88.2MiB/s (92.5MB/s)(895MiB/10145msec); 0 zone resets 00:27:28.934 slat (usec): min=21, max=112035, avg=1974.57, stdev=5837.11 00:27:28.934 clat (usec): min=1022, max=496151, avg=179348.20, stdev=105925.86 00:27:28.934 lat (usec): min=1071, max=503052, avg=181322.77, stdev=106769.00 00:27:28.934 clat percentiles (msec): 00:27:28.934 | 
1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 59], 20.00th=[ 101], 00:27:28.934 | 30.00th=[ 113], 40.00th=[ 140], 50.00th=[ 169], 60.00th=[ 192], 00:27:28.934 | 70.00th=[ 215], 80.00th=[ 251], 90.00th=[ 317], 95.00th=[ 409], 00:27:28.934 | 99.00th=[ 481], 99.50th=[ 485], 99.90th=[ 493], 99.95th=[ 498], 00:27:28.934 | 99.99th=[ 498] 00:27:28.934 bw ( KiB/s): min=43520, max=155136, per=9.71%, avg=89996.80, stdev=31697.08, samples=20 00:27:28.934 iops : min= 170, max= 606, avg=351.55, stdev=123.82, samples=20 00:27:28.934 lat (msec) : 2=0.31%, 4=1.51%, 10=2.35%, 20=1.68%, 50=3.41% 00:27:28.934 lat (msec) : 100=10.56%, 250=60.34%, 500=19.84% 00:27:28.934 cpu : usr=0.81%, sys=1.06%, ctx=1744, majf=0, minf=1 00:27:28.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:27:28.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.934 issued rwts: total=0,3578,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.934 job9: (groupid=0, jobs=1): err= 0: pid=3943357: Wed Nov 6 15:31:55 2024 00:27:28.934 write: IOPS=267, BW=67.0MiB/s (70.2MB/s)(682MiB/10183msec); 0 zone resets 00:27:28.934 slat (usec): min=27, max=174503, avg=3325.24, stdev=8061.66 00:27:28.934 clat (msec): min=6, max=546, avg=235.47, stdev=117.46 00:27:28.934 lat (msec): min=6, max=555, avg=238.79, stdev=118.90 00:27:28.934 clat percentiles (msec): 00:27:28.934 | 1.00th=[ 18], 5.00th=[ 92], 10.00th=[ 117], 20.00th=[ 142], 00:27:28.934 | 30.00th=[ 161], 40.00th=[ 178], 50.00th=[ 205], 60.00th=[ 232], 00:27:28.934 | 70.00th=[ 284], 80.00th=[ 351], 90.00th=[ 435], 95.00th=[ 451], 00:27:28.934 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 542], 99.95th=[ 542], 00:27:28.934 | 99.99th=[ 550] 00:27:28.934 bw ( KiB/s): min=35840, max=122880, per=7.36%, avg=68224.00, stdev=27005.72, samples=20 00:27:28.934 iops : min= 140, max= 
480, avg=266.50, stdev=105.49, samples=20 00:27:28.934 lat (msec) : 10=0.15%, 20=1.36%, 50=1.54%, 100=3.89%, 250=57.44% 00:27:28.934 lat (msec) : 500=34.68%, 750=0.95% 00:27:28.934 cpu : usr=0.72%, sys=0.92%, ctx=928, majf=0, minf=1 00:27:28.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:27:28.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.934 issued rwts: total=0,2728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.934 job10: (groupid=0, jobs=1): err= 0: pid=3943358: Wed Nov 6 15:31:55 2024 00:27:28.935 write: IOPS=487, BW=122MiB/s (128MB/s)(1237MiB/10145msec); 0 zone resets 00:27:28.935 slat (usec): min=20, max=38673, avg=1157.32, stdev=3981.93 00:27:28.935 clat (usec): min=975, max=555589, avg=130005.00, stdev=109194.23 00:27:28.935 lat (usec): min=1015, max=555638, avg=131162.33, stdev=110168.11 00:27:28.935 clat percentiles (msec): 00:27:28.935 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 19], 20.00th=[ 35], 00:27:28.935 | 30.00th=[ 51], 40.00th=[ 84], 50.00th=[ 104], 60.00th=[ 125], 00:27:28.935 | 70.00th=[ 171], 80.00th=[ 205], 90.00th=[ 275], 95.00th=[ 376], 00:27:28.935 | 99.00th=[ 472], 99.50th=[ 489], 99.90th=[ 542], 99.95th=[ 550], 00:27:28.935 | 99.99th=[ 558] 00:27:28.935 bw ( KiB/s): min=38912, max=305152, per=13.50%, avg=125077.80, stdev=66843.82, samples=20 00:27:28.935 iops : min= 152, max= 1192, avg=488.55, stdev=261.06, samples=20 00:27:28.935 lat (usec) : 1000=0.02% 00:27:28.935 lat (msec) : 2=0.46%, 4=1.31%, 10=3.56%, 20=5.90%, 50=18.80% 00:27:28.935 lat (msec) : 100=17.95%, 250=39.43%, 500=12.21%, 750=0.36% 00:27:28.935 cpu : usr=1.17%, sys=1.56%, ctx=3224, majf=0, minf=1 00:27:28.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:27:28.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:27:28.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.935 issued rwts: total=0,4948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.935 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.935 00:27:28.935 Run status group 0 (all jobs): 00:27:28.935 WRITE: bw=905MiB/s (949MB/s), 65.8MiB/s-122MiB/s (69.0MB/s-128MB/s), io=9217MiB (9664MB), run=10145-10187msec 00:27:28.935 00:27:28.935 Disk stats (read/write): 00:27:28.935 nvme0n1: ios=49/5424, merge=0/0, ticks=51/1243977, in_queue=1244028, util=97.38% 00:27:28.935 nvme10n1: ios=45/5348, merge=0/0, ticks=75/1243062, in_queue=1243137, util=97.73% 00:27:28.935 nvme1n1: ios=48/5710, merge=0/0, ticks=780/1240084, in_queue=1240864, util=99.97% 00:27:28.935 nvme2n1: ios=15/8717, merge=0/0, ticks=105/1246629, in_queue=1246734, util=97.87% 00:27:28.935 nvme3n1: ios=47/6204, merge=0/0, ticks=78/1248267, in_queue=1248345, util=98.23% 00:27:28.935 nvme4n1: ios=0/6855, merge=0/0, ticks=0/1206879, in_queue=1206879, util=98.13% 00:27:28.935 nvme5n1: ios=0/7255, merge=0/0, ticks=0/1247877, in_queue=1247877, util=98.34% 00:27:28.935 nvme6n1: ios=0/5405, merge=0/0, ticks=0/1250920, in_queue=1250920, util=98.44% 00:27:28.935 nvme7n1: ios=49/6978, merge=0/0, ticks=2948/1203115, in_queue=1206063, util=100.00% 00:27:28.935 nvme8n1: ios=0/5438, merge=0/0, ticks=0/1237703, in_queue=1237703, util=98.98% 00:27:28.935 nvme9n1: ios=0/9718, merge=0/0, ticks=0/1221116, in_queue=1221116, util=99.05% 00:27:28.935 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:28.935 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:28.935 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.935 15:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:28.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:28.935 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:28.935 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:28.935 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:28.935 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK1 00:27:28.935 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:28.935 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK1 00:27:29.194 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:29.194 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:29.194 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.194 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:29.194 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.194 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.194 15:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:29.761 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK2 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK2 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK2 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.761 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:30.020 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK3 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK3 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.020 15:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:30.589 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:30.589 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:30.589 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:30.589 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:30.589 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK4 00:27:30.589 15:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:30.589 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK4 00:27:30.589 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:30.589 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:30.589 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.589 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.589 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.589 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.589 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:31.158 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK5 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # 
grep -q -w SPDK5 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.158 15:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:31.416 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:31.416 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:31.416 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:31.416 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:31.416 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK6 00:27:31.416 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:31.416 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK6 00:27:31.675 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:31.675 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:31.675 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.675 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:31.675 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.675 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.675 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:31.934 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:31.934 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:31.934 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:31.934 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:31.934 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK7 00:27:31.934 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK7 00:27:31.934 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:31.934 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:31.934 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:31.934 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:31.934 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:32.192 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.192 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:32.192 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:32.451 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK8 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK8 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:32.451 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:32.710 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK9 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK9 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:32.710 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:32.969 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:32.970 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:32.970 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:32.970 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:32.970 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK10 00:27:33.229 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:33.229 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK10 00:27:33.229 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:33.229 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:33.229 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.229 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:33.229 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.229 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:33.229 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:33.487 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:33.487 15:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:33.487 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK11 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK11 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 
00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:33.488 15:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:33.488 rmmod nvme_tcp 00:27:33.488 rmmod nvme_fabrics 00:27:33.488 rmmod nvme_keyring 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3935435 ']' 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3935435 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' -z 3935435 ']' 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # kill -0 3935435 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # uname 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3935435 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3935435' 00:27:33.488 killing process with pid 3935435 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@971 -- # kill 3935435 00:27:33.488 15:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@976 -- # wait 3935435 00:27:36.776 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:36.776 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:36.776 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:36.776 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:27:36.776 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:27:36.776 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:36.776 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:27:36.776 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.776 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:36.776 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.776 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.776 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.312 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:39.312 00:27:39.312 real 1m17.495s 00:27:39.312 user 4m39.736s 00:27:39.312 sys 0m16.938s 00:27:39.312 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:39.312 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.312 ************************************ 00:27:39.312 END TEST nvmf_multiconnection 00:27:39.312 ************************************ 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:39.313 ************************************ 00:27:39.313 START TEST nvmf_initiator_timeout 00:27:39.313 ************************************ 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:39.313 * Looking for test storage... 
00:27:39.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:39.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.313 --rc genhtml_branch_coverage=1 00:27:39.313 --rc genhtml_function_coverage=1 00:27:39.313 --rc genhtml_legend=1 00:27:39.313 --rc geninfo_all_blocks=1 00:27:39.313 --rc geninfo_unexecuted_blocks=1 00:27:39.313 00:27:39.313 ' 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:39.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.313 --rc genhtml_branch_coverage=1 00:27:39.313 --rc genhtml_function_coverage=1 00:27:39.313 --rc genhtml_legend=1 00:27:39.313 --rc geninfo_all_blocks=1 00:27:39.313 --rc geninfo_unexecuted_blocks=1 00:27:39.313 00:27:39.313 ' 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:39.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.313 --rc genhtml_branch_coverage=1 00:27:39.313 --rc genhtml_function_coverage=1 00:27:39.313 --rc genhtml_legend=1 00:27:39.313 --rc geninfo_all_blocks=1 00:27:39.313 --rc geninfo_unexecuted_blocks=1 00:27:39.313 00:27:39.313 ' 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:39.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.313 --rc genhtml_branch_coverage=1 00:27:39.313 --rc genhtml_function_coverage=1 00:27:39.313 --rc genhtml_legend=1 00:27:39.313 --rc geninfo_all_blocks=1 00:27:39.313 --rc geninfo_unexecuted_blocks=1 00:27:39.313 00:27:39.313 ' 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.313 
15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:39.313 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:39.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.314 15:32:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.882 15:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:45.882 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.882 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:45.883 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.883 15:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:45.883 Found net devices under 0000:86:00.0: cvl_0_0 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.883 15:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:45.883 Found net devices under 0000:86:00.1: cvl_0_1 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.883 15:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:27:45.883 00:27:45.883 --- 10.0.0.2 ping statistics --- 00:27:45.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.883 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:45.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:27:45.883 00:27:45.883 --- 10.0.0.1 ping statistics --- 00:27:45.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.883 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3949261 
00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3949261 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # '[' -z 3949261 ']' 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:45.883 15:32:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:45.883 [2024-11-06 15:32:12.684447] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:27:45.883 [2024-11-06 15:32:12.684539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.883 [2024-11-06 15:32:12.814497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:45.883 [2024-11-06 15:32:12.914217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:45.883 [2024-11-06 15:32:12.914262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.883 [2024-11-06 15:32:12.914272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.883 [2024-11-06 15:32:12.914281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.883 [2024-11-06 15:32:12.914288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.883 [2024-11-06 15:32:12.916827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.883 [2024-11-06 15:32:12.916905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.883 [2024-11-06 15:32:12.916970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.883 [2024-11-06 15:32:12.916992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.883 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:45.883 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@866 -- # return 0 00:27:45.884 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:45.884 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:45.884 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:46.143 
15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.143 Malloc0 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.143 Delay0 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.143 [2024-11-06 15:32:13.648489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.143 [2024-11-06 15:32:13.680769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.143 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:47.520 15:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:47.520 
15:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # local i=0 00:27:47.520 15:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:47.521 15:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:47.521 15:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # sleep 2 00:27:49.434 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:49.434 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:49.434 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:27:49.434 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:49.434 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:49.434 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # return 0 00:27:49.434 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3949980 00:27:49.434 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:49.434 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:49.434 [global] 00:27:49.434 thread=1 00:27:49.434 invalidate=1 00:27:49.434 rw=write 00:27:49.434 time_based=1 00:27:49.434 runtime=60 00:27:49.434 ioengine=libaio 00:27:49.434 direct=1 00:27:49.434 bs=4096 00:27:49.434 
iodepth=1 00:27:49.434 norandommap=0 00:27:49.434 numjobs=1 00:27:49.434 00:27:49.434 verify_dump=1 00:27:49.434 verify_backlog=512 00:27:49.434 verify_state_save=0 00:27:49.434 do_verify=1 00:27:49.434 verify=crc32c-intel 00:27:49.434 [job0] 00:27:49.434 filename=/dev/nvme0n1 00:27:49.434 Could not set queue depth (nvme0n1) 00:27:49.691 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:49.691 fio-3.35 00:27:49.691 Starting 1 thread 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.221 true 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.221 true 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:52.221 true 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.221 true 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.221 15:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:55.495 true 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:55.495 true 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.495 15:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:55.495 true 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:55.495 true 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:55.495 15:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3949980 00:28:51.681 00:28:51.681 job0: (groupid=0, jobs=1): err= 0: pid=3950099: Wed Nov 6 15:33:17 2024 00:28:51.681 read: IOPS=410, BW=1643KiB/s (1683kB/s)(96.3MiB/60017msec) 00:28:51.681 slat (nsec): min=6907, max=44793, avg=8384.65, stdev=1878.78 00:28:51.681 clat (usec): min=205, max=41702k, avg=2201.09, stdev=265584.20 00:28:51.681 lat (usec): min=224, max=41702k, avg=2209.48, stdev=265584.30 00:28:51.681 clat percentiles (usec): 00:28:51.681 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 249], 00:28:51.681 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:28:51.681 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 449], 00:28:51.681 
| 99.00th=[ 498], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:28:51.681 | 99.99th=[44303] 00:28:51.681 write: IOPS=418, BW=1672KiB/s (1712kB/s)(98.0MiB/60017msec); 0 zone resets 00:28:51.681 slat (usec): min=10, max=29322, avg=13.85, stdev=197.73 00:28:51.681 clat (usec): min=146, max=457, avg=200.64, stdev=24.94 00:28:51.681 lat (usec): min=171, max=29629, avg=214.50, stdev=200.32 00:28:51.681 clat percentiles (usec): 00:28:51.682 | 1.00th=[ 172], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 180], 00:28:51.682 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:28:51.682 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 241], 00:28:51.682 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 338], 99.95th=[ 363], 00:28:51.682 | 99.99th=[ 375] 00:28:51.682 bw ( KiB/s): min= 2664, max= 8544, per=100.00%, avg=7433.48, stdev=1510.88, samples=27 00:28:51.682 iops : min= 666, max= 2136, avg=1858.37, stdev=377.72, samples=27 00:28:51.682 lat (usec) : 250=60.57%, 500=38.97%, 750=0.17% 00:28:51.682 lat (msec) : 2=0.01%, 50=0.29%, >=2000=0.01% 00:28:51.682 cpu : usr=0.69%, sys=1.36%, ctx=49752, majf=0, minf=35 00:28:51.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:51.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.682 issued rwts: total=24658,25088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:51.682 00:28:51.682 Run status group 0 (all jobs): 00:28:51.682 READ: bw=1643KiB/s (1683kB/s), 1643KiB/s-1643KiB/s (1683kB/s-1683kB/s), io=96.3MiB (101MB), run=60017-60017msec 00:28:51.682 WRITE: bw=1672KiB/s (1712kB/s), 1672KiB/s-1672KiB/s (1712kB/s-1712kB/s), io=98.0MiB (103MB), run=60017-60017msec 00:28:51.682 00:28:51.682 Disk stats (read/write): 00:28:51.682 nvme0n1: ios=24757/25088, merge=0/0, ticks=13747/4745, in_queue=18492, 
util=99.74% 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:51.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1221 -- # local i=0 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1233 -- # return 0 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:51.682 nvmf hotplug test: fio successful as expected 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:51.682 15:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.682 rmmod nvme_tcp 00:28:51.682 rmmod nvme_fabrics 00:28:51.682 rmmod nvme_keyring 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3949261 ']' 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3949261 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' -z 3949261 ']' 00:28:51.682 
15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # kill -0 3949261 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # uname 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3949261 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3949261' 00:28:51.682 killing process with pid 3949261 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # kill 3949261 00:28:51.682 15:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@976 -- # wait 3949261 00:28:51.682 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.682 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.682 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.682 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:51.682 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:51.682 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.682 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # 
iptables-restore 00:28:51.682 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.682 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.682 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.682 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.682 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.663 15:33:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.663 00:28:53.663 real 1m14.642s 00:28:53.663 user 4m28.627s 00:28:53.663 sys 0m7.744s 00:28:53.663 15:33:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:53.663 15:33:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:53.663 ************************************ 00:28:53.663 END TEST nvmf_initiator_timeout 00:28:53.663 ************************************ 00:28:53.663 15:33:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:53.663 15:33:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:53.663 15:33:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:53.663 15:33:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.663 15:33:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:00.236 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.236 15:33:26 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:29:00.236 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:00.236 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:00.236 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:00.236 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:00.236 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:00.236 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:29:00.236 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:00.236 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:00.237 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:00.237 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:00.237 Found net devices under 0000:86:00.0: cvl_0_0 00:29:00.237 15:33:26 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:00.237 Found net devices under 0000:86:00.1: cvl_0_1 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:00.237 ************************************ 00:29:00.237 START 
TEST nvmf_perf_adq 00:29:00.237 ************************************ 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:00.237 * Looking for test storage... 00:29:00.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.237 15:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:00.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.237 --rc genhtml_branch_coverage=1 00:29:00.237 --rc genhtml_function_coverage=1 00:29:00.237 --rc genhtml_legend=1 00:29:00.237 --rc geninfo_all_blocks=1 00:29:00.237 --rc geninfo_unexecuted_blocks=1 00:29:00.237 00:29:00.237 ' 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:00.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.237 --rc genhtml_branch_coverage=1 00:29:00.237 --rc genhtml_function_coverage=1 00:29:00.237 --rc genhtml_legend=1 00:29:00.237 --rc geninfo_all_blocks=1 00:29:00.237 --rc geninfo_unexecuted_blocks=1 00:29:00.237 00:29:00.237 ' 00:29:00.237 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:00.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.237 --rc genhtml_branch_coverage=1 00:29:00.238 --rc genhtml_function_coverage=1 00:29:00.238 --rc genhtml_legend=1 00:29:00.238 --rc geninfo_all_blocks=1 00:29:00.238 --rc geninfo_unexecuted_blocks=1 00:29:00.238 00:29:00.238 ' 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:00.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.238 --rc genhtml_branch_coverage=1 00:29:00.238 --rc genhtml_function_coverage=1 00:29:00.238 --rc genhtml_legend=1 00:29:00.238 --rc geninfo_all_blocks=1 00:29:00.238 --rc geninfo_unexecuted_blocks=1 00:29:00.238 00:29:00.238 ' 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.238 
15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:00.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:00.238 15:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.238 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.523 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:05.524 15:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:05.524 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:05.524 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:05.524 Found net devices under 0000:86:00.0: cvl_0_0 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:05.524 Found net devices under 0000:86:00.1: cvl_0_1 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:05.524 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:06.461 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:08.365 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:13.647 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:29:13.647 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:13.647 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.647 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:13.647 15:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:13.647 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:13.647 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.647 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.647 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.647 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:13.647 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:13.647 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:13.647 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.648 15:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:13.648 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:29:13.648 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:13.648 Found net devices under 0000:86:00.0: cvl_0_0 
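The `Found net devices under …: cvl_0_0` record above is produced by common.sh expanding the sysfs glob `/sys/bus/pci/devices/$pci/net/*` (common.sh@411) and then trimming each result down to a bare interface name at common.sh@427 before echoing it at @428. A minimal sketch of that trim, using a path copied from this log rather than read from a live system:

```shell
# common.sh@427 applies "${pci_net_devs[@]##*/}": the POSIX '##*/'
# expansion deletes the longest prefix ending in '/', turning a sysfs
# device path into the bare net-device name that gets echoed at @428
# and appended to net_devs at @429.
p="/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0"
echo "${p##*/}"
```

This prints `cvl_0_0`, the interface name the log reports for PCI function 0000:86:00.0.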
00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:13.648 Found net devices under 0000:86:00.1: cvl_0_1 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:29:13.648 00:29:13.648 --- 10.0.0.2 ping statistics --- 00:29:13.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.648 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:29:13.648 00:29:13.648 --- 10.0.0.1 ping statistics --- 00:29:13.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.648 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.648 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.648 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:13.648 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.648 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:13.648 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:13.648 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3968429 00:29:13.648 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3968429 00:29:13.648 
15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:13.648 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3968429 ']' 00:29:13.648 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.648 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:13.648 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.648 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:13.648 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:13.648 [2024-11-06 15:33:41.125397] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:13.648 [2024-11-06 15:33:41.125484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.648 [2024-11-06 15:33:41.253998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:13.908 [2024-11-06 15:33:41.363305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.908 [2024-11-06 15:33:41.363349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:13.908 [2024-11-06 15:33:41.363362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.908 [2024-11-06 15:33:41.363371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.908 [2024-11-06 15:33:41.363379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.908 [2024-11-06 15:33:41.365906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.908 [2024-11-06 15:33:41.365955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:13.908 [2024-11-06 15:33:41.366033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.908 [2024-11-06 15:33:41.366043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:14.476 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:14.476 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:29:14.476 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:14.476 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:14.476 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.476 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.476 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:29:14.476 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:14.476 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:14.476 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.476 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.476 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.476 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:14.476 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:14.476 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.476 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.476 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.476 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:14.476 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.476 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:15.045 [2024-11-06 15:33:42.381041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.045 
15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:15.045 Malloc1 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.045 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:15.046 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.046 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.046 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.046 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:15.046 [2024-11-06 15:33:42.509500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:29:15.046 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.046 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3968682 00:29:15.046 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:29:15.046 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:16.956 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:29:16.957 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.957 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:16.957 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.957 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:29:16.957 "tick_rate": 2100000000, 00:29:16.957 "poll_groups": [ 00:29:16.957 { 00:29:16.957 "name": "nvmf_tgt_poll_group_000", 00:29:16.957 "admin_qpairs": 1, 00:29:16.957 "io_qpairs": 1, 00:29:16.957 "current_admin_qpairs": 1, 00:29:16.957 "current_io_qpairs": 1, 00:29:16.957 "pending_bdev_io": 0, 00:29:16.957 "completed_nvme_io": 18290, 00:29:16.957 "transports": [ 00:29:16.957 { 00:29:16.957 "trtype": "TCP" 00:29:16.957 } 00:29:16.957 ] 00:29:16.957 }, 00:29:16.957 { 00:29:16.957 "name": "nvmf_tgt_poll_group_001", 00:29:16.957 "admin_qpairs": 0, 00:29:16.957 "io_qpairs": 1, 00:29:16.957 "current_admin_qpairs": 0, 00:29:16.957 "current_io_qpairs": 1, 00:29:16.957 "pending_bdev_io": 0, 00:29:16.957 "completed_nvme_io": 18190, 00:29:16.957 "transports": [ 
00:29:16.957 { 00:29:16.957 "trtype": "TCP" 00:29:16.957 } 00:29:16.957 ] 00:29:16.957 }, 00:29:16.957 { 00:29:16.957 "name": "nvmf_tgt_poll_group_002", 00:29:16.957 "admin_qpairs": 0, 00:29:16.957 "io_qpairs": 1, 00:29:16.957 "current_admin_qpairs": 0, 00:29:16.957 "current_io_qpairs": 1, 00:29:16.957 "pending_bdev_io": 0, 00:29:16.957 "completed_nvme_io": 18200, 00:29:16.957 "transports": [ 00:29:16.957 { 00:29:16.957 "trtype": "TCP" 00:29:16.957 } 00:29:16.957 ] 00:29:16.957 }, 00:29:16.957 { 00:29:16.957 "name": "nvmf_tgt_poll_group_003", 00:29:16.957 "admin_qpairs": 0, 00:29:16.957 "io_qpairs": 1, 00:29:16.957 "current_admin_qpairs": 0, 00:29:16.957 "current_io_qpairs": 1, 00:29:16.957 "pending_bdev_io": 0, 00:29:16.957 "completed_nvme_io": 18086, 00:29:16.957 "transports": [ 00:29:16.957 { 00:29:16.957 "trtype": "TCP" 00:29:16.957 } 00:29:16.957 ] 00:29:16.957 } 00:29:16.957 ] 00:29:16.957 }' 00:29:16.957 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:16.957 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:29:16.957 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:29:16.957 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:29:16.957 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3968682 00:29:26.941 Initializing NVMe Controllers 00:29:26.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:26.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:26.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:26.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:26.941 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:26.941 Initialization complete. Launching workers. 00:29:26.941 ======================================================== 00:29:26.941 Latency(us) 00:29:26.941 Device Information : IOPS MiB/s Average min max 00:29:26.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9869.10 38.55 6486.19 2763.44 10443.12 00:29:26.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10032.70 39.19 6378.61 2447.50 10636.03 00:29:26.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10013.20 39.11 6391.04 2514.21 10760.94 00:29:26.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9990.00 39.02 6405.49 2659.49 10579.37 00:29:26.941 ======================================================== 00:29:26.941 Total : 39904.99 155.88 6415.06 2447.50 10760.94 00:29:26.941 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.942 rmmod nvme_tcp 00:29:26.942 rmmod nvme_fabrics 00:29:26.942 rmmod nvme_keyring 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:26.942 15:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3968429 ']' 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3968429 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3968429 ']' 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3968429 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3968429 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3968429' 00:29:26.942 killing process with pid 3968429 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3968429 00:29:26.942 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3968429 00:29:26.942 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:26.942 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:26.942 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:26.942 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:26.942 
15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:26.942 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:26.942 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:26.942 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.942 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.942 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.942 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.942 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.848 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.848 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:29:28.848 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:28.848 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:29.787 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:32.323 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:37.602 15:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:37.602 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.602 15:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:37.602 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:29:37.602 Found net devices under 0000:86:00.0: cvl_0_0 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:37.602 Found net devices under 0000:86:00.1: cvl_0_1 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.602 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:37.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:29:37.603 00:29:37.603 --- 10.0.0.2 ping statistics --- 00:29:37.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.603 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:37.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:29:37.603 00:29:37.603 --- 10.0.0.1 ping statistics --- 00:29:37.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.603 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:37.603 net.core.busy_poll = 1 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:37.603 net.core.busy_read = 1 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3972470 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3972470 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3972470 ']' 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:37.603 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:37.603 [2024-11-06 15:34:05.061251] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:37.603 [2024-11-06 15:34:05.061349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.603 [2024-11-06 15:34:05.191361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:37.863 [2024-11-06 15:34:05.296351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.863 [2024-11-06 15:34:05.296400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.863 [2024-11-06 15:34:05.296411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.863 [2024-11-06 15:34:05.296420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:37.863 [2024-11-06 15:34:05.296428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.863 [2024-11-06 15:34:05.299039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.863 [2024-11-06 15:34:05.299118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.863 [2024-11-06 15:34:05.299185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.863 [2024-11-06 15:34:05.299226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.432 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:38.691 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.691 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:38.691 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.691 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:38.691 [2024-11-06 15:34:06.303428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.950 15:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:38.950 Malloc1 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:38.950 [2024-11-06 15:34:06.435108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3972729 
00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:38.950 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:40.856 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:40.856 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.856 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:40.856 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.856 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:40.856 "tick_rate": 2100000000, 00:29:40.856 "poll_groups": [ 00:29:40.856 { 00:29:40.856 "name": "nvmf_tgt_poll_group_000", 00:29:40.856 "admin_qpairs": 1, 00:29:40.856 "io_qpairs": 2, 00:29:40.856 "current_admin_qpairs": 1, 00:29:40.856 "current_io_qpairs": 2, 00:29:40.856 "pending_bdev_io": 0, 00:29:40.856 "completed_nvme_io": 25608, 00:29:40.856 "transports": [ 00:29:40.856 { 00:29:40.856 "trtype": "TCP" 00:29:40.856 } 00:29:40.856 ] 00:29:40.856 }, 00:29:40.856 { 00:29:40.856 "name": "nvmf_tgt_poll_group_001", 00:29:40.856 "admin_qpairs": 0, 00:29:40.856 "io_qpairs": 2, 00:29:40.856 "current_admin_qpairs": 0, 00:29:40.856 "current_io_qpairs": 2, 00:29:40.856 "pending_bdev_io": 0, 00:29:40.856 "completed_nvme_io": 23759, 00:29:40.856 "transports": [ 00:29:40.856 { 00:29:40.856 "trtype": "TCP" 00:29:40.856 } 00:29:40.856 ] 00:29:40.856 }, 00:29:40.856 { 00:29:40.856 "name": "nvmf_tgt_poll_group_002", 00:29:40.856 "admin_qpairs": 0, 00:29:40.856 "io_qpairs": 0, 00:29:40.856 "current_admin_qpairs": 0, 
00:29:40.856 "current_io_qpairs": 0, 00:29:40.856 "pending_bdev_io": 0, 00:29:40.856 "completed_nvme_io": 0, 00:29:40.856 "transports": [ 00:29:40.856 { 00:29:40.856 "trtype": "TCP" 00:29:40.856 } 00:29:40.856 ] 00:29:40.856 }, 00:29:40.856 { 00:29:40.856 "name": "nvmf_tgt_poll_group_003", 00:29:40.856 "admin_qpairs": 0, 00:29:40.856 "io_qpairs": 0, 00:29:40.856 "current_admin_qpairs": 0, 00:29:40.856 "current_io_qpairs": 0, 00:29:40.856 "pending_bdev_io": 0, 00:29:40.856 "completed_nvme_io": 0, 00:29:40.856 "transports": [ 00:29:40.856 { 00:29:40.856 "trtype": "TCP" 00:29:40.856 } 00:29:40.856 ] 00:29:40.856 } 00:29:40.856 ] 00:29:40.856 }' 00:29:40.856 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:40.856 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:41.115 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:29:41.115 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:29:41.115 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3972729 00:29:49.238 Initializing NVMe Controllers 00:29:49.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:49.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:49.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:49.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:49.238 Initialization complete. Launching workers. 
00:29:49.238 ======================================================== 00:29:49.238 Latency(us) 00:29:49.238 Device Information : IOPS MiB/s Average min max 00:29:49.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6930.90 27.07 9237.42 1930.61 54738.66 00:29:49.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6824.20 26.66 9383.83 1366.87 53422.56 00:29:49.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6661.70 26.02 9608.76 1709.82 53813.27 00:29:49.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6653.50 25.99 9650.68 1534.60 54616.30 00:29:49.238 ======================================================== 00:29:49.238 Total : 27070.28 105.74 9467.28 1366.87 54738.66 00:29:49.238 00:29:49.238 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:49.238 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:49.238 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:49.238 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.238 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:49.238 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.238 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.238 rmmod nvme_tcp 00:29:49.238 rmmod nvme_fabrics 00:29:49.238 rmmod nvme_keyring 00:29:49.238 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.238 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:49.239 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:49.239 15:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3972470 ']' 00:29:49.239 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3972470 00:29:49.239 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3972470 ']' 00:29:49.239 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3972470 00:29:49.239 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:29:49.239 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:49.239 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3972470 00:29:49.239 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:49.239 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:49.239 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3972470' 00:29:49.239 killing process with pid 3972470 00:29:49.239 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3972470 00:29:49.239 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3972470 00:29:50.620 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.620 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.620 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.620 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:50.620 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:50.620 
15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:50.620 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.620 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.620 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.620 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.620 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.620 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:53.158 00:29:53.158 real 0m53.501s 00:29:53.158 user 2m58.912s 00:29:53.158 sys 0m10.575s 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:53.158 ************************************ 00:29:53.158 END TEST nvmf_perf_adq 00:29:53.158 ************************************ 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:29:53.158 ************************************ 00:29:53.158 START TEST nvmf_shutdown 00:29:53.158 ************************************ 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:53.158 * Looking for test storage... 00:29:53.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.158 15:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:53.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.158 --rc genhtml_branch_coverage=1 00:29:53.158 --rc genhtml_function_coverage=1 00:29:53.158 --rc genhtml_legend=1 00:29:53.158 --rc geninfo_all_blocks=1 00:29:53.158 --rc geninfo_unexecuted_blocks=1 00:29:53.158 00:29:53.158 ' 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:53.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.158 --rc genhtml_branch_coverage=1 00:29:53.158 --rc genhtml_function_coverage=1 00:29:53.158 --rc genhtml_legend=1 00:29:53.158 --rc geninfo_all_blocks=1 00:29:53.158 --rc geninfo_unexecuted_blocks=1 00:29:53.158 00:29:53.158 ' 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:53.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.158 --rc genhtml_branch_coverage=1 00:29:53.158 --rc genhtml_function_coverage=1 00:29:53.158 --rc genhtml_legend=1 00:29:53.158 --rc geninfo_all_blocks=1 00:29:53.158 --rc geninfo_unexecuted_blocks=1 00:29:53.158 00:29:53.158 ' 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:53.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.158 --rc genhtml_branch_coverage=1 00:29:53.158 --rc genhtml_function_coverage=1 00:29:53.158 --rc genhtml_legend=1 00:29:53.158 --rc geninfo_all_blocks=1 00:29:53.158 --rc geninfo_unexecuted_blocks=1 00:29:53.158 00:29:53.158 ' 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.158 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:53.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:53.159 ************************************ 00:29:53.159 START TEST nvmf_shutdown_tc1 00:29:53.159 ************************************ 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:53.159 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:59.735 15:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.735 15:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:59.735 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.735 15:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:59.735 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:59.735 Found net devices under 0000:86:00.0: cvl_0_0 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:59.735 Found net devices under 0000:86:00.1: cvl_0_1 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.735 15:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.735 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:29:59.736 00:29:59.736 --- 10.0.0.2 ping statistics --- 00:29:59.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.736 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:29:59.736 00:29:59.736 --- 10.0.0.1 ping statistics --- 00:29:59.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.736 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3978170 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3978170 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3978170 ']' 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:59.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:59.736 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.736 [2024-11-06 15:34:26.714107] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:59.736 [2024-11-06 15:34:26.714194] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.736 [2024-11-06 15:34:26.842272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.736 [2024-11-06 15:34:26.945658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.736 [2024-11-06 15:34:26.945700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.736 [2024-11-06 15:34:26.945711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.736 [2024-11-06 15:34:26.945722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.736 [2024-11-06 15:34:26.945730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
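The interface plumbing traced above (`nvmf/common.sh` `nvmf_tcp_init`) moves the target-side device into a private network namespace, addresses both ends, and opens the NVMe/TCP port. A dry-run sketch of that sequence follows; `run` only echoes each command here so the sketch executes without root, and the device names `cvl_0_0`/`cvl_0_1` are the ones from this particular run, not fixed names:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps seen in the trace.
# run() echoes instead of executing; drop the echo to apply for real (needs root).
run() { echo "$*"; }

nvmf_tcp_init_sketch() {
	local target_dev=cvl_0_0 initiator_dev=cvl_0_1 ns=cvl_0_0_ns_spdk
	run ip -4 addr flush "$target_dev"
	run ip -4 addr flush "$initiator_dev"
	# Target device lives in its own namespace; initiator stays in the root ns.
	run ip netns add "$ns"
	run ip link set "$target_dev" netns "$ns"
	run ip addr add 10.0.0.1/24 dev "$initiator_dev"
	run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_dev"
	run ip link set "$initiator_dev" up
	run ip netns exec "$ns" ip link set "$target_dev" up
	run ip netns exec "$ns" ip link set lo up
	# Open the NVMe/TCP listener port toward the initiator side.
	run iptables -I INPUT 1 -i "$initiator_dev" -p tcp --dport 4420 -j ACCEPT
}

nvmf_tcp_init_sketch
```

After these steps the trace verifies connectivity in both directions with `ping -c 1`, once from the root namespace and once from inside `cvl_0_0_ns_spdk`.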
00:29:59.736 [2024-11-06 15:34:26.948364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.736 [2024-11-06 15:34:26.948452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.736 [2024-11-06 15:34:26.948519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.736 [2024-11-06 15:34:26.948545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.995 [2024-11-06 15:34:27.570151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.995 15:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.995 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:59.996 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.256 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.256 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.256 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.256 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.256 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.256 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.256 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:00.256 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.256 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.256 Malloc1 00:30:00.256 [2024-11-06 15:34:27.747987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.256 Malloc2 00:30:00.515 Malloc3 00:30:00.515 Malloc4 00:30:00.515 Malloc5 00:30:00.774 Malloc6 00:30:00.774 Malloc7 00:30:01.033 Malloc8 00:30:01.033 Malloc9 
00:30:01.033 Malloc10 00:30:01.033 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.033 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:01.033 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:01.033 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:01.293 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3978595 00:30:01.293 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3978595 /var/tmp/bdevperf.sock 00:30:01.293 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3978595 ']' 00:30:01.293 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:01.293 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:01.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.294 { 00:30:01.294 "params": { 00:30:01.294 "name": "Nvme$subsystem", 00:30:01.294 "trtype": "$TEST_TRANSPORT", 00:30:01.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.294 "adrfam": "ipv4", 00:30:01.294 "trsvcid": "$NVMF_PORT", 00:30:01.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.294 "hdgst": ${hdgst:-false}, 00:30:01.294 "ddgst": ${ddgst:-false} 00:30:01.294 }, 00:30:01.294 "method": "bdev_nvme_attach_controller" 00:30:01.294 } 00:30:01.294 EOF 00:30:01.294 )") 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.294 { 00:30:01.294 "params": { 00:30:01.294 "name": "Nvme$subsystem", 00:30:01.294 "trtype": "$TEST_TRANSPORT", 00:30:01.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.294 "adrfam": "ipv4", 00:30:01.294 "trsvcid": "$NVMF_PORT", 00:30:01.294 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.294 "hdgst": ${hdgst:-false}, 00:30:01.294 "ddgst": ${ddgst:-false} 00:30:01.294 }, 00:30:01.294 "method": "bdev_nvme_attach_controller" 00:30:01.294 } 00:30:01.294 EOF 00:30:01.294 )") 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.294 { 00:30:01.294 "params": { 00:30:01.294 "name": "Nvme$subsystem", 00:30:01.294 "trtype": "$TEST_TRANSPORT", 00:30:01.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.294 "adrfam": "ipv4", 00:30:01.294 "trsvcid": "$NVMF_PORT", 00:30:01.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.294 "hdgst": ${hdgst:-false}, 00:30:01.294 "ddgst": ${ddgst:-false} 00:30:01.294 }, 00:30:01.294 "method": "bdev_nvme_attach_controller" 00:30:01.294 } 00:30:01.294 EOF 00:30:01.294 )") 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.294 { 00:30:01.294 "params": { 00:30:01.294 "name": "Nvme$subsystem", 00:30:01.294 "trtype": "$TEST_TRANSPORT", 00:30:01.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.294 "adrfam": "ipv4", 00:30:01.294 "trsvcid": "$NVMF_PORT", 00:30:01.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.294 "hdgst": 
${hdgst:-false}, 00:30:01.294 "ddgst": ${ddgst:-false} 00:30:01.294 }, 00:30:01.294 "method": "bdev_nvme_attach_controller" 00:30:01.294 } 00:30:01.294 EOF 00:30:01.294 )") 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.294 { 00:30:01.294 "params": { 00:30:01.294 "name": "Nvme$subsystem", 00:30:01.294 "trtype": "$TEST_TRANSPORT", 00:30:01.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.294 "adrfam": "ipv4", 00:30:01.294 "trsvcid": "$NVMF_PORT", 00:30:01.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.294 "hdgst": ${hdgst:-false}, 00:30:01.294 "ddgst": ${ddgst:-false} 00:30:01.294 }, 00:30:01.294 "method": "bdev_nvme_attach_controller" 00:30:01.294 } 00:30:01.294 EOF 00:30:01.294 )") 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.294 { 00:30:01.294 "params": { 00:30:01.294 "name": "Nvme$subsystem", 00:30:01.294 "trtype": "$TEST_TRANSPORT", 00:30:01.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.294 "adrfam": "ipv4", 00:30:01.294 "trsvcid": "$NVMF_PORT", 00:30:01.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.294 "hdgst": ${hdgst:-false}, 00:30:01.294 "ddgst": ${ddgst:-false} 00:30:01.294 }, 00:30:01.294 "method": "bdev_nvme_attach_controller" 
00:30:01.294 } 00:30:01.294 EOF 00:30:01.294 )") 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.294 { 00:30:01.294 "params": { 00:30:01.294 "name": "Nvme$subsystem", 00:30:01.294 "trtype": "$TEST_TRANSPORT", 00:30:01.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.294 "adrfam": "ipv4", 00:30:01.294 "trsvcid": "$NVMF_PORT", 00:30:01.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.294 "hdgst": ${hdgst:-false}, 00:30:01.294 "ddgst": ${ddgst:-false} 00:30:01.294 }, 00:30:01.294 "method": "bdev_nvme_attach_controller" 00:30:01.294 } 00:30:01.294 EOF 00:30:01.294 )") 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.294 { 00:30:01.294 "params": { 00:30:01.294 "name": "Nvme$subsystem", 00:30:01.294 "trtype": "$TEST_TRANSPORT", 00:30:01.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.294 "adrfam": "ipv4", 00:30:01.294 "trsvcid": "$NVMF_PORT", 00:30:01.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.294 "hdgst": ${hdgst:-false}, 00:30:01.294 "ddgst": ${ddgst:-false} 00:30:01.294 }, 00:30:01.294 "method": "bdev_nvme_attach_controller" 00:30:01.294 } 00:30:01.294 EOF 00:30:01.294 )") 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.294 { 00:30:01.294 "params": { 00:30:01.294 "name": "Nvme$subsystem", 00:30:01.294 "trtype": "$TEST_TRANSPORT", 00:30:01.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.294 "adrfam": "ipv4", 00:30:01.294 "trsvcid": "$NVMF_PORT", 00:30:01.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.294 "hdgst": ${hdgst:-false}, 00:30:01.294 "ddgst": ${ddgst:-false} 00:30:01.294 }, 00:30:01.294 "method": "bdev_nvme_attach_controller" 00:30:01.294 } 00:30:01.294 EOF 00:30:01.294 )") 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.294 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.294 { 00:30:01.294 "params": { 00:30:01.294 "name": "Nvme$subsystem", 00:30:01.294 "trtype": "$TEST_TRANSPORT", 00:30:01.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.294 "adrfam": "ipv4", 00:30:01.294 "trsvcid": "$NVMF_PORT", 00:30:01.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.294 "hdgst": ${hdgst:-false}, 00:30:01.294 "ddgst": ${ddgst:-false} 00:30:01.294 }, 00:30:01.294 "method": "bdev_nvme_attach_controller" 00:30:01.294 } 00:30:01.294 EOF 00:30:01.294 )") 00:30:01.295 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:01.295 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:30:01.295 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:30:01.295 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:01.295 "params": { 00:30:01.295 "name": "Nvme1", 00:30:01.295 "trtype": "tcp", 00:30:01.295 "traddr": "10.0.0.2", 00:30:01.295 "adrfam": "ipv4", 00:30:01.295 "trsvcid": "4420", 00:30:01.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:01.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:01.295 "hdgst": false, 00:30:01.295 "ddgst": false 00:30:01.295 }, 00:30:01.295 "method": "bdev_nvme_attach_controller" 00:30:01.295 },{ 00:30:01.295 "params": { 00:30:01.295 "name": "Nvme2", 00:30:01.295 "trtype": "tcp", 00:30:01.295 "traddr": "10.0.0.2", 00:30:01.295 "adrfam": "ipv4", 00:30:01.295 "trsvcid": "4420", 00:30:01.295 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:01.295 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:01.295 "hdgst": false, 00:30:01.295 "ddgst": false 00:30:01.295 }, 00:30:01.295 "method": "bdev_nvme_attach_controller" 00:30:01.295 },{ 00:30:01.295 "params": { 00:30:01.295 "name": "Nvme3", 00:30:01.295 "trtype": "tcp", 00:30:01.295 "traddr": "10.0.0.2", 00:30:01.295 "adrfam": "ipv4", 00:30:01.295 "trsvcid": "4420", 00:30:01.295 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:01.295 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:01.295 "hdgst": false, 00:30:01.295 "ddgst": false 00:30:01.295 }, 00:30:01.295 "method": "bdev_nvme_attach_controller" 00:30:01.295 },{ 00:30:01.295 "params": { 00:30:01.295 "name": "Nvme4", 00:30:01.295 "trtype": "tcp", 00:30:01.295 "traddr": "10.0.0.2", 00:30:01.295 "adrfam": "ipv4", 00:30:01.295 "trsvcid": "4420", 00:30:01.295 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:01.295 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:01.295 "hdgst": false, 00:30:01.295 "ddgst": false 00:30:01.295 }, 00:30:01.295 "method": "bdev_nvme_attach_controller" 00:30:01.295 },{ 
00:30:01.295 "params": { 00:30:01.295 "name": "Nvme5", 00:30:01.295 "trtype": "tcp", 00:30:01.295 "traddr": "10.0.0.2", 00:30:01.295 "adrfam": "ipv4", 00:30:01.295 "trsvcid": "4420", 00:30:01.295 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:01.295 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:01.295 "hdgst": false, 00:30:01.295 "ddgst": false 00:30:01.295 }, 00:30:01.295 "method": "bdev_nvme_attach_controller" 00:30:01.295 },{ 00:30:01.295 "params": { 00:30:01.295 "name": "Nvme6", 00:30:01.295 "trtype": "tcp", 00:30:01.295 "traddr": "10.0.0.2", 00:30:01.295 "adrfam": "ipv4", 00:30:01.295 "trsvcid": "4420", 00:30:01.295 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:01.295 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:01.295 "hdgst": false, 00:30:01.295 "ddgst": false 00:30:01.295 }, 00:30:01.295 "method": "bdev_nvme_attach_controller" 00:30:01.295 },{ 00:30:01.295 "params": { 00:30:01.295 "name": "Nvme7", 00:30:01.295 "trtype": "tcp", 00:30:01.295 "traddr": "10.0.0.2", 00:30:01.295 "adrfam": "ipv4", 00:30:01.295 "trsvcid": "4420", 00:30:01.295 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:01.295 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:01.295 "hdgst": false, 00:30:01.295 "ddgst": false 00:30:01.295 }, 00:30:01.295 "method": "bdev_nvme_attach_controller" 00:30:01.295 },{ 00:30:01.295 "params": { 00:30:01.295 "name": "Nvme8", 00:30:01.295 "trtype": "tcp", 00:30:01.295 "traddr": "10.0.0.2", 00:30:01.295 "adrfam": "ipv4", 00:30:01.295 "trsvcid": "4420", 00:30:01.295 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:01.295 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:01.295 "hdgst": false, 00:30:01.295 "ddgst": false 00:30:01.295 }, 00:30:01.295 "method": "bdev_nvme_attach_controller" 00:30:01.295 },{ 00:30:01.295 "params": { 00:30:01.295 "name": "Nvme9", 00:30:01.295 "trtype": "tcp", 00:30:01.295 "traddr": "10.0.0.2", 00:30:01.295 "adrfam": "ipv4", 00:30:01.295 "trsvcid": "4420", 00:30:01.295 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:01.295 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:30:01.295 "hdgst": false, 00:30:01.295 "ddgst": false 00:30:01.295 }, 00:30:01.295 "method": "bdev_nvme_attach_controller" 00:30:01.295 },{ 00:30:01.295 "params": { 00:30:01.295 "name": "Nvme10", 00:30:01.295 "trtype": "tcp", 00:30:01.295 "traddr": "10.0.0.2", 00:30:01.295 "adrfam": "ipv4", 00:30:01.295 "trsvcid": "4420", 00:30:01.295 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:01.295 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:01.295 "hdgst": false, 00:30:01.295 "ddgst": false 00:30:01.295 }, 00:30:01.295 "method": "bdev_nvme_attach_controller" 00:30:01.295 }' 00:30:01.295 [2024-11-06 15:34:28.773574] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:01.295 [2024-11-06 15:34:28.773666] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:01.295 [2024-11-06 15:34:28.904413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.555 [2024-11-06 15:34:29.010083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.944 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:02.944 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:30:02.944 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:02.944 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.944 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:02.944 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.944 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3978595 00:30:02.944 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:30:02.944 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:30:04.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3978595 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3978170 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.328 { 00:30:04.328 "params": { 00:30:04.328 "name": "Nvme$subsystem", 00:30:04.328 "trtype": "$TEST_TRANSPORT", 00:30:04.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.328 "adrfam": "ipv4", 00:30:04.328 "trsvcid": "$NVMF_PORT", 00:30:04.328 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.328 "hdgst": ${hdgst:-false}, 00:30:04.328 "ddgst": ${ddgst:-false} 00:30:04.328 }, 00:30:04.328 "method": "bdev_nvme_attach_controller" 00:30:04.328 } 00:30:04.328 EOF 00:30:04.328 )") 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.328 { 00:30:04.328 "params": { 00:30:04.328 "name": "Nvme$subsystem", 00:30:04.328 "trtype": "$TEST_TRANSPORT", 00:30:04.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.328 "adrfam": "ipv4", 00:30:04.328 "trsvcid": "$NVMF_PORT", 00:30:04.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.328 "hdgst": ${hdgst:-false}, 00:30:04.328 "ddgst": ${ddgst:-false} 00:30:04.328 }, 00:30:04.328 "method": "bdev_nvme_attach_controller" 00:30:04.328 } 00:30:04.328 EOF 00:30:04.328 )") 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.328 { 00:30:04.328 "params": { 00:30:04.328 "name": "Nvme$subsystem", 00:30:04.328 "trtype": "$TEST_TRANSPORT", 00:30:04.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.328 "adrfam": "ipv4", 00:30:04.328 "trsvcid": "$NVMF_PORT", 00:30:04.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.328 "hdgst": 
${hdgst:-false}, 00:30:04.328 "ddgst": ${ddgst:-false} 00:30:04.328 }, 00:30:04.328 "method": "bdev_nvme_attach_controller" 00:30:04.328 } 00:30:04.328 EOF 00:30:04.328 )") 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.328 { 00:30:04.328 "params": { 00:30:04.328 "name": "Nvme$subsystem", 00:30:04.328 "trtype": "$TEST_TRANSPORT", 00:30:04.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.328 "adrfam": "ipv4", 00:30:04.328 "trsvcid": "$NVMF_PORT", 00:30:04.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.328 "hdgst": ${hdgst:-false}, 00:30:04.328 "ddgst": ${ddgst:-false} 00:30:04.328 }, 00:30:04.328 "method": "bdev_nvme_attach_controller" 00:30:04.328 } 00:30:04.328 EOF 00:30:04.328 )") 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.328 { 00:30:04.328 "params": { 00:30:04.328 "name": "Nvme$subsystem", 00:30:04.328 "trtype": "$TEST_TRANSPORT", 00:30:04.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.328 "adrfam": "ipv4", 00:30:04.328 "trsvcid": "$NVMF_PORT", 00:30:04.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.328 "hdgst": ${hdgst:-false}, 00:30:04.328 "ddgst": ${ddgst:-false} 00:30:04.328 }, 00:30:04.328 "method": "bdev_nvme_attach_controller" 
00:30:04.328 } 00:30:04.328 EOF 00:30:04.328 )") 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.328 { 00:30:04.328 "params": { 00:30:04.328 "name": "Nvme$subsystem", 00:30:04.328 "trtype": "$TEST_TRANSPORT", 00:30:04.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.328 "adrfam": "ipv4", 00:30:04.328 "trsvcid": "$NVMF_PORT", 00:30:04.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.328 "hdgst": ${hdgst:-false}, 00:30:04.328 "ddgst": ${ddgst:-false} 00:30:04.328 }, 00:30:04.328 "method": "bdev_nvme_attach_controller" 00:30:04.328 } 00:30:04.328 EOF 00:30:04.328 )") 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.328 { 00:30:04.328 "params": { 00:30:04.328 "name": "Nvme$subsystem", 00:30:04.328 "trtype": "$TEST_TRANSPORT", 00:30:04.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.328 "adrfam": "ipv4", 00:30:04.328 "trsvcid": "$NVMF_PORT", 00:30:04.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.328 "hdgst": ${hdgst:-false}, 00:30:04.328 "ddgst": ${ddgst:-false} 00:30:04.328 }, 00:30:04.328 "method": "bdev_nvme_attach_controller" 00:30:04.328 } 00:30:04.328 EOF 00:30:04.328 )") 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.328 { 00:30:04.328 "params": { 00:30:04.328 "name": "Nvme$subsystem", 00:30:04.328 "trtype": "$TEST_TRANSPORT", 00:30:04.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.328 "adrfam": "ipv4", 00:30:04.328 "trsvcid": "$NVMF_PORT", 00:30:04.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.328 "hdgst": ${hdgst:-false}, 00:30:04.328 "ddgst": ${ddgst:-false} 00:30:04.328 }, 00:30:04.328 "method": "bdev_nvme_attach_controller" 00:30:04.328 } 00:30:04.328 EOF 00:30:04.328 )") 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.328 { 00:30:04.328 "params": { 00:30:04.328 "name": "Nvme$subsystem", 00:30:04.328 "trtype": "$TEST_TRANSPORT", 00:30:04.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.328 "adrfam": "ipv4", 00:30:04.328 "trsvcid": "$NVMF_PORT", 00:30:04.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.328 "hdgst": ${hdgst:-false}, 00:30:04.328 "ddgst": ${ddgst:-false} 00:30:04.328 }, 00:30:04.328 "method": "bdev_nvme_attach_controller" 00:30:04.328 } 00:30:04.328 EOF 00:30:04.328 )") 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.328 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.328 { 00:30:04.328 "params": { 00:30:04.328 "name": "Nvme$subsystem", 00:30:04.328 "trtype": "$TEST_TRANSPORT", 00:30:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.329 "adrfam": "ipv4", 00:30:04.329 "trsvcid": "$NVMF_PORT", 00:30:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.329 "hdgst": ${hdgst:-false}, 00:30:04.329 "ddgst": ${ddgst:-false} 00:30:04.329 }, 00:30:04.329 "method": "bdev_nvme_attach_controller" 00:30:04.329 } 00:30:04.329 EOF 00:30:04.329 )") 00:30:04.329 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:04.329 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:30:04.329 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:30:04.329 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:04.329 "params": { 00:30:04.329 "name": "Nvme1", 00:30:04.329 "trtype": "tcp", 00:30:04.329 "traddr": "10.0.0.2", 00:30:04.329 "adrfam": "ipv4", 00:30:04.329 "trsvcid": "4420", 00:30:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.329 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:04.329 "hdgst": false, 00:30:04.329 "ddgst": false 00:30:04.329 }, 00:30:04.329 "method": "bdev_nvme_attach_controller" 00:30:04.329 },{ 00:30:04.329 "params": { 00:30:04.329 "name": "Nvme2", 00:30:04.329 "trtype": "tcp", 00:30:04.329 "traddr": "10.0.0.2", 00:30:04.329 "adrfam": "ipv4", 00:30:04.329 "trsvcid": "4420", 00:30:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:04.329 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:04.329 "hdgst": false, 00:30:04.329 "ddgst": false 00:30:04.329 }, 
00:30:04.329 "method": "bdev_nvme_attach_controller" 00:30:04.329 },{ 00:30:04.329 "params": { 00:30:04.329 "name": "Nvme3", 00:30:04.329 "trtype": "tcp", 00:30:04.329 "traddr": "10.0.0.2", 00:30:04.329 "adrfam": "ipv4", 00:30:04.329 "trsvcid": "4420", 00:30:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:04.329 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:04.329 "hdgst": false, 00:30:04.329 "ddgst": false 00:30:04.329 }, 00:30:04.329 "method": "bdev_nvme_attach_controller" 00:30:04.329 },{ 00:30:04.329 "params": { 00:30:04.329 "name": "Nvme4", 00:30:04.329 "trtype": "tcp", 00:30:04.329 "traddr": "10.0.0.2", 00:30:04.329 "adrfam": "ipv4", 00:30:04.329 "trsvcid": "4420", 00:30:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:04.329 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:04.329 "hdgst": false, 00:30:04.329 "ddgst": false 00:30:04.329 }, 00:30:04.329 "method": "bdev_nvme_attach_controller" 00:30:04.329 },{ 00:30:04.329 "params": { 00:30:04.329 "name": "Nvme5", 00:30:04.329 "trtype": "tcp", 00:30:04.329 "traddr": "10.0.0.2", 00:30:04.329 "adrfam": "ipv4", 00:30:04.329 "trsvcid": "4420", 00:30:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:04.329 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:04.329 "hdgst": false, 00:30:04.329 "ddgst": false 00:30:04.329 }, 00:30:04.329 "method": "bdev_nvme_attach_controller" 00:30:04.329 },{ 00:30:04.329 "params": { 00:30:04.329 "name": "Nvme6", 00:30:04.329 "trtype": "tcp", 00:30:04.329 "traddr": "10.0.0.2", 00:30:04.329 "adrfam": "ipv4", 00:30:04.329 "trsvcid": "4420", 00:30:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:04.329 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:04.329 "hdgst": false, 00:30:04.329 "ddgst": false 00:30:04.329 }, 00:30:04.329 "method": "bdev_nvme_attach_controller" 00:30:04.329 },{ 00:30:04.329 "params": { 00:30:04.329 "name": "Nvme7", 00:30:04.329 "trtype": "tcp", 00:30:04.329 "traddr": "10.0.0.2", 00:30:04.329 "adrfam": "ipv4", 00:30:04.329 "trsvcid": "4420", 00:30:04.329 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:04.329 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:04.329 "hdgst": false, 00:30:04.329 "ddgst": false 00:30:04.329 }, 00:30:04.329 "method": "bdev_nvme_attach_controller" 00:30:04.329 },{ 00:30:04.329 "params": { 00:30:04.329 "name": "Nvme8", 00:30:04.329 "trtype": "tcp", 00:30:04.329 "traddr": "10.0.0.2", 00:30:04.329 "adrfam": "ipv4", 00:30:04.329 "trsvcid": "4420", 00:30:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:04.329 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:04.329 "hdgst": false, 00:30:04.329 "ddgst": false 00:30:04.329 }, 00:30:04.329 "method": "bdev_nvme_attach_controller" 00:30:04.329 },{ 00:30:04.329 "params": { 00:30:04.329 "name": "Nvme9", 00:30:04.329 "trtype": "tcp", 00:30:04.329 "traddr": "10.0.0.2", 00:30:04.329 "adrfam": "ipv4", 00:30:04.329 "trsvcid": "4420", 00:30:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:04.329 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:04.329 "hdgst": false, 00:30:04.329 "ddgst": false 00:30:04.329 }, 00:30:04.329 "method": "bdev_nvme_attach_controller" 00:30:04.329 },{ 00:30:04.329 "params": { 00:30:04.329 "name": "Nvme10", 00:30:04.329 "trtype": "tcp", 00:30:04.329 "traddr": "10.0.0.2", 00:30:04.329 "adrfam": "ipv4", 00:30:04.329 "trsvcid": "4420", 00:30:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:04.329 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:04.329 "hdgst": false, 00:30:04.329 "ddgst": false 00:30:04.329 }, 00:30:04.329 "method": "bdev_nvme_attach_controller" 00:30:04.329 }' 00:30:04.329 [2024-11-06 15:34:31.634714] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:30:04.329 [2024-11-06 15:34:31.634808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3979067 ] 00:30:04.329 [2024-11-06 15:34:31.766688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.329 [2024-11-06 15:34:31.886867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.325 Running I/O for 1 seconds... 00:30:07.153 1933.00 IOPS, 120.81 MiB/s 00:30:07.153 Latency(us) 00:30:07.153 [2024-11-06T14:34:34.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.153 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.153 Verification LBA range: start 0x0 length 0x400 00:30:07.153 Nvme1n1 : 1.11 235.70 14.73 0.00 0.00 265791.70 8426.06 237677.23 00:30:07.153 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.153 Verification LBA range: start 0x0 length 0x400 00:30:07.153 Nvme2n1 : 1.11 231.58 14.47 0.00 0.00 269435.37 31082.79 240673.16 00:30:07.153 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.153 Verification LBA range: start 0x0 length 0x400 00:30:07.153 Nvme3n1 : 1.12 232.29 14.52 0.00 0.00 263957.52 1755.43 242670.45 00:30:07.153 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.153 Verification LBA range: start 0x0 length 0x400 00:30:07.153 Nvme4n1 : 1.16 274.83 17.18 0.00 0.00 220160.59 16602.45 254654.17 00:30:07.153 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.153 Verification LBA range: start 0x0 length 0x400 00:30:07.153 Nvme5n1 : 1.13 226.42 14.15 0.00 0.00 263020.50 16352.79 244667.73 00:30:07.153 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.153 Verification LBA range: start 0x0 
length 0x400 00:30:07.153 Nvme6n1 : 1.13 229.80 14.36 0.00 0.00 254000.85 4962.01 246665.02 00:30:07.153 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.153 Verification LBA range: start 0x0 length 0x400 00:30:07.153 Nvme7n1 : 1.17 273.38 17.09 0.00 0.00 211715.61 15978.30 255652.82 00:30:07.153 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.153 Verification LBA range: start 0x0 length 0x400 00:30:07.153 Nvme8n1 : 1.17 272.36 17.02 0.00 0.00 209231.29 16352.79 238675.87 00:30:07.153 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.153 Verification LBA range: start 0x0 length 0x400 00:30:07.153 Nvme9n1 : 1.16 221.07 13.82 0.00 0.00 253214.96 21221.18 251658.24 00:30:07.153 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.153 Verification LBA range: start 0x0 length 0x400 00:30:07.153 Nvme10n1 : 1.22 262.05 16.38 0.00 0.00 204260.30 10922.67 265639.25 00:30:07.153 [2024-11-06T14:34:34.791Z] =================================================================================================================== 00:30:07.153 [2024-11-06T14:34:34.791Z] Total : 2459.48 153.72 0.00 0.00 238844.20 1755.43 265639.25 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:08.533 rmmod nvme_tcp 00:30:08.533 rmmod nvme_fabrics 00:30:08.533 rmmod nvme_keyring 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3978170 ']' 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3978170 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3978170 ']' 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3978170 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3978170 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3978170' 00:30:08.533 killing process with pid 3978170 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3978170 00:30:08.533 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3978170 00:30:11.824 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:11.824 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.824 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.824 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:30:11.824 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:30:11.824 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.824 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.824 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:30:11.824 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.824 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.824 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.824 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.732 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:13.732 00:30:13.732 real 0m20.458s 00:30:13.732 user 0m53.730s 00:30:13.732 sys 0m6.322s 00:30:13.732 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:13.732 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:13.732 ************************************ 00:30:13.732 END TEST nvmf_shutdown_tc1 00:30:13.732 ************************************ 00:30:13.732 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:13.732 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:13.732 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:13.732 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:13.733 ************************************ 00:30:13.733 START TEST nvmf_shutdown_tc2 00:30:13.733 ************************************ 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:30:13.733 15:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.733 15:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:13.733 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:13.733 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:13.733 Found net devices under 0000:86:00.0: cvl_0_0 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.733 15:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:13.733 Found net devices under 0000:86:00.1: cvl_0_1 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.733 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.734 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:13.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:30:13.994 00:30:13.994 --- 10.0.0.2 ping statistics --- 00:30:13.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.994 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:30:13.994 00:30:13.994 --- 10.0.0.1 ping statistics --- 00:30:13.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.994 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:13.994 
15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3980663 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3980663 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3980663 ']' 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:13.994 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:13.994 [2024-11-06 15:34:41.512436] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:30:13.994 [2024-11-06 15:34:41.512522] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.254 [2024-11-06 15:34:41.642014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:14.254 [2024-11-06 15:34:41.751732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.254 [2024-11-06 15:34:41.751775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.254 [2024-11-06 15:34:41.751786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.254 [2024-11-06 15:34:41.751799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.254 [2024-11-06 15:34:41.751806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:14.254 [2024-11-06 15:34:41.754359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.254 [2024-11-06 15:34:41.754438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.254 [2024-11-06 15:34:41.754503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.254 [2024-11-06 15:34:41.754526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.822 [2024-11-06 15:34:42.352668] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.822 15:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:14.822 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:14.823 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:14.823 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:14.823 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:14.823 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:14.823 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:14.823 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:14.823 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.823 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.082 Malloc1 00:30:15.082 [2024-11-06 15:34:42.532008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.082 Malloc2 00:30:15.082 Malloc3 00:30:15.341 Malloc4 00:30:15.341 Malloc5 00:30:15.600 Malloc6 00:30:15.600 Malloc7 00:30:15.600 Malloc8 00:30:15.860 Malloc9 
00:30:15.860 Malloc10 00:30:15.860 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.860 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:15.860 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:15.860 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.860 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3981157 00:30:15.860 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3981157 /var/tmp/bdevperf.sock 00:30:15.860 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3981157 ']' 00:30:15.860 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:15.860 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:15.860 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:15.860 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:30:15.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:15.861 { 00:30:15.861 "params": { 00:30:15.861 "name": "Nvme$subsystem", 00:30:15.861 "trtype": "$TEST_TRANSPORT", 00:30:15.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.861 "adrfam": "ipv4", 00:30:15.861 "trsvcid": "$NVMF_PORT", 00:30:15.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.861 "hdgst": ${hdgst:-false}, 00:30:15.861 "ddgst": ${ddgst:-false} 00:30:15.861 }, 00:30:15.861 "method": "bdev_nvme_attach_controller" 00:30:15.861 } 00:30:15.861 EOF 00:30:15.861 )") 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:15.861 { 00:30:15.861 "params": { 00:30:15.861 "name": "Nvme$subsystem", 00:30:15.861 "trtype": "$TEST_TRANSPORT", 00:30:15.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.861 
"adrfam": "ipv4", 00:30:15.861 "trsvcid": "$NVMF_PORT", 00:30:15.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.861 "hdgst": ${hdgst:-false}, 00:30:15.861 "ddgst": ${ddgst:-false} 00:30:15.861 }, 00:30:15.861 "method": "bdev_nvme_attach_controller" 00:30:15.861 } 00:30:15.861 EOF 00:30:15.861 )") 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:15.861 { 00:30:15.861 "params": { 00:30:15.861 "name": "Nvme$subsystem", 00:30:15.861 "trtype": "$TEST_TRANSPORT", 00:30:15.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.861 "adrfam": "ipv4", 00:30:15.861 "trsvcid": "$NVMF_PORT", 00:30:15.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.861 "hdgst": ${hdgst:-false}, 00:30:15.861 "ddgst": ${ddgst:-false} 00:30:15.861 }, 00:30:15.861 "method": "bdev_nvme_attach_controller" 00:30:15.861 } 00:30:15.861 EOF 00:30:15.861 )") 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:15.861 { 00:30:15.861 "params": { 00:30:15.861 "name": "Nvme$subsystem", 00:30:15.861 "trtype": "$TEST_TRANSPORT", 00:30:15.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.861 "adrfam": "ipv4", 00:30:15.861 "trsvcid": "$NVMF_PORT", 00:30:15.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:30:15.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.861 "hdgst": ${hdgst:-false}, 00:30:15.861 "ddgst": ${ddgst:-false} 00:30:15.861 }, 00:30:15.861 "method": "bdev_nvme_attach_controller" 00:30:15.861 } 00:30:15.861 EOF 00:30:15.861 )") 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:15.861 { 00:30:15.861 "params": { 00:30:15.861 "name": "Nvme$subsystem", 00:30:15.861 "trtype": "$TEST_TRANSPORT", 00:30:15.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.861 "adrfam": "ipv4", 00:30:15.861 "trsvcid": "$NVMF_PORT", 00:30:15.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.861 "hdgst": ${hdgst:-false}, 00:30:15.861 "ddgst": ${ddgst:-false} 00:30:15.861 }, 00:30:15.861 "method": "bdev_nvme_attach_controller" 00:30:15.861 } 00:30:15.861 EOF 00:30:15.861 )") 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:15.861 { 00:30:15.861 "params": { 00:30:15.861 "name": "Nvme$subsystem", 00:30:15.861 "trtype": "$TEST_TRANSPORT", 00:30:15.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.861 "adrfam": "ipv4", 00:30:15.861 "trsvcid": "$NVMF_PORT", 00:30:15.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.861 "hdgst": ${hdgst:-false}, 00:30:15.861 "ddgst": 
${ddgst:-false} 00:30:15.861 }, 00:30:15.861 "method": "bdev_nvme_attach_controller" 00:30:15.861 } 00:30:15.861 EOF 00:30:15.861 )") 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:15.861 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:15.861 { 00:30:15.861 "params": { 00:30:15.861 "name": "Nvme$subsystem", 00:30:15.861 "trtype": "$TEST_TRANSPORT", 00:30:15.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.861 "adrfam": "ipv4", 00:30:15.861 "trsvcid": "$NVMF_PORT", 00:30:15.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.861 "hdgst": ${hdgst:-false}, 00:30:15.861 "ddgst": ${ddgst:-false} 00:30:15.861 }, 00:30:15.861 "method": "bdev_nvme_attach_controller" 00:30:15.861 } 00:30:15.861 EOF 00:30:15.861 )") 00:30:16.121 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.121 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.121 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.121 { 00:30:16.121 "params": { 00:30:16.121 "name": "Nvme$subsystem", 00:30:16.121 "trtype": "$TEST_TRANSPORT", 00:30:16.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.121 "adrfam": "ipv4", 00:30:16.121 "trsvcid": "$NVMF_PORT", 00:30:16.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.121 "hdgst": ${hdgst:-false}, 00:30:16.121 "ddgst": ${ddgst:-false} 00:30:16.121 }, 00:30:16.121 "method": "bdev_nvme_attach_controller" 00:30:16.121 } 00:30:16.121 EOF 00:30:16.121 
)") 00:30:16.121 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.121 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.121 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.121 { 00:30:16.121 "params": { 00:30:16.121 "name": "Nvme$subsystem", 00:30:16.121 "trtype": "$TEST_TRANSPORT", 00:30:16.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.121 "adrfam": "ipv4", 00:30:16.121 "trsvcid": "$NVMF_PORT", 00:30:16.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.121 "hdgst": ${hdgst:-false}, 00:30:16.121 "ddgst": ${ddgst:-false} 00:30:16.121 }, 00:30:16.121 "method": "bdev_nvme_attach_controller" 00:30:16.121 } 00:30:16.121 EOF 00:30:16.121 )") 00:30:16.121 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.121 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.121 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.121 { 00:30:16.121 "params": { 00:30:16.121 "name": "Nvme$subsystem", 00:30:16.121 "trtype": "$TEST_TRANSPORT", 00:30:16.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.121 "adrfam": "ipv4", 00:30:16.121 "trsvcid": "$NVMF_PORT", 00:30:16.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.121 "hdgst": ${hdgst:-false}, 00:30:16.121 "ddgst": ${ddgst:-false} 00:30:16.121 }, 00:30:16.121 "method": "bdev_nvme_attach_controller" 00:30:16.121 } 00:30:16.121 EOF 00:30:16.121 )") 00:30:16.121 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.121 
15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:30:16.121 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:30:16.121 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:16.121 "params": { 00:30:16.121 "name": "Nvme1", 00:30:16.121 "trtype": "tcp", 00:30:16.121 "traddr": "10.0.0.2", 00:30:16.121 "adrfam": "ipv4", 00:30:16.121 "trsvcid": "4420", 00:30:16.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:16.122 "hdgst": false, 00:30:16.122 "ddgst": false 00:30:16.122 }, 00:30:16.122 "method": "bdev_nvme_attach_controller" 00:30:16.122 },{ 00:30:16.122 "params": { 00:30:16.122 "name": "Nvme2", 00:30:16.122 "trtype": "tcp", 00:30:16.122 "traddr": "10.0.0.2", 00:30:16.122 "adrfam": "ipv4", 00:30:16.122 "trsvcid": "4420", 00:30:16.122 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:16.122 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:16.122 "hdgst": false, 00:30:16.122 "ddgst": false 00:30:16.122 }, 00:30:16.122 "method": "bdev_nvme_attach_controller" 00:30:16.122 },{ 00:30:16.122 "params": { 00:30:16.122 "name": "Nvme3", 00:30:16.122 "trtype": "tcp", 00:30:16.122 "traddr": "10.0.0.2", 00:30:16.122 "adrfam": "ipv4", 00:30:16.122 "trsvcid": "4420", 00:30:16.122 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:16.122 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:16.122 "hdgst": false, 00:30:16.122 "ddgst": false 00:30:16.122 }, 00:30:16.122 "method": "bdev_nvme_attach_controller" 00:30:16.122 },{ 00:30:16.122 "params": { 00:30:16.122 "name": "Nvme4", 00:30:16.122 "trtype": "tcp", 00:30:16.122 "traddr": "10.0.0.2", 00:30:16.122 "adrfam": "ipv4", 00:30:16.122 "trsvcid": "4420", 00:30:16.122 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:16.122 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:16.122 "hdgst": false, 00:30:16.122 "ddgst": false 00:30:16.122 }, 
00:30:16.122 "method": "bdev_nvme_attach_controller" 00:30:16.122 },{ 00:30:16.122 "params": { 00:30:16.122 "name": "Nvme5", 00:30:16.122 "trtype": "tcp", 00:30:16.122 "traddr": "10.0.0.2", 00:30:16.122 "adrfam": "ipv4", 00:30:16.122 "trsvcid": "4420", 00:30:16.122 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:16.122 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:16.122 "hdgst": false, 00:30:16.122 "ddgst": false 00:30:16.122 }, 00:30:16.122 "method": "bdev_nvme_attach_controller" 00:30:16.122 },{ 00:30:16.122 "params": { 00:30:16.122 "name": "Nvme6", 00:30:16.122 "trtype": "tcp", 00:30:16.122 "traddr": "10.0.0.2", 00:30:16.122 "adrfam": "ipv4", 00:30:16.122 "trsvcid": "4420", 00:30:16.122 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:16.122 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:16.122 "hdgst": false, 00:30:16.122 "ddgst": false 00:30:16.122 }, 00:30:16.122 "method": "bdev_nvme_attach_controller" 00:30:16.122 },{ 00:30:16.122 "params": { 00:30:16.122 "name": "Nvme7", 00:30:16.122 "trtype": "tcp", 00:30:16.122 "traddr": "10.0.0.2", 00:30:16.122 "adrfam": "ipv4", 00:30:16.122 "trsvcid": "4420", 00:30:16.122 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:16.122 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:16.122 "hdgst": false, 00:30:16.122 "ddgst": false 00:30:16.122 }, 00:30:16.122 "method": "bdev_nvme_attach_controller" 00:30:16.122 },{ 00:30:16.122 "params": { 00:30:16.122 "name": "Nvme8", 00:30:16.122 "trtype": "tcp", 00:30:16.122 "traddr": "10.0.0.2", 00:30:16.122 "adrfam": "ipv4", 00:30:16.122 "trsvcid": "4420", 00:30:16.122 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:16.122 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:16.122 "hdgst": false, 00:30:16.122 "ddgst": false 00:30:16.122 }, 00:30:16.122 "method": "bdev_nvme_attach_controller" 00:30:16.122 },{ 00:30:16.122 "params": { 00:30:16.122 "name": "Nvme9", 00:30:16.122 "trtype": "tcp", 00:30:16.122 "traddr": "10.0.0.2", 00:30:16.122 "adrfam": "ipv4", 00:30:16.122 "trsvcid": "4420", 00:30:16.122 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:16.122 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:16.122 "hdgst": false, 00:30:16.122 "ddgst": false 00:30:16.122 }, 00:30:16.122 "method": "bdev_nvme_attach_controller" 00:30:16.122 },{ 00:30:16.122 "params": { 00:30:16.122 "name": "Nvme10", 00:30:16.122 "trtype": "tcp", 00:30:16.122 "traddr": "10.0.0.2", 00:30:16.122 "adrfam": "ipv4", 00:30:16.122 "trsvcid": "4420", 00:30:16.122 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:16.122 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:16.122 "hdgst": false, 00:30:16.122 "ddgst": false 00:30:16.122 }, 00:30:16.122 "method": "bdev_nvme_attach_controller" 00:30:16.122 }' 00:30:16.122 [2024-11-06 15:34:43.528688] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:16.122 [2024-11-06 15:34:43.528777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981157 ] 00:30:16.122 [2024-11-06 15:34:43.657060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.381 [2024-11-06 15:34:43.772525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.288 Running I/O for 10 seconds... 
00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3981157 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3981157 ']' 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3981157 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:18.547 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3981157 00:30:18.807 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 
-- # process_name=reactor_0 00:30:18.807 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:18.807 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3981157' 00:30:18.807 killing process with pid 3981157 00:30:18.807 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3981157 00:30:18.807 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3981157 00:30:18.807 Received shutdown signal, test time was about 0.774630 seconds 00:30:18.807 00:30:18.807 Latency(us) 00:30:18.807 [2024-11-06T14:34:46.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.807 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.807 Verification LBA range: start 0x0 length 0x400 00:30:18.807 Nvme1n1 : 0.76 252.48 15.78 0.00 0.00 250106.31 37449.14 224694.86 00:30:18.807 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.807 Verification LBA range: start 0x0 length 0x400 00:30:18.807 Nvme2n1 : 0.75 262.66 16.42 0.00 0.00 232926.30 5149.26 241671.80 00:30:18.807 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.807 Verification LBA range: start 0x0 length 0x400 00:30:18.807 Nvme3n1 : 0.74 265.42 16.59 0.00 0.00 224678.78 6459.98 224694.86 00:30:18.807 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.807 Verification LBA range: start 0x0 length 0x400 00:30:18.807 Nvme4n1 : 0.74 261.08 16.32 0.00 0.00 224489.08 16852.11 235679.94 00:30:18.807 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.807 Verification LBA range: start 0x0 length 0x400 00:30:18.807 Nvme5n1 : 0.75 256.37 16.02 0.00 0.00 223758.30 17351.44 241671.80 
00:30:18.807 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.807 Verification LBA range: start 0x0 length 0x400 00:30:18.807 Nvme6n1 : 0.77 250.50 15.66 0.00 0.00 224110.04 20472.20 241671.80 00:30:18.807 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.807 Verification LBA range: start 0x0 length 0x400 00:30:18.807 Nvme7n1 : 0.76 253.44 15.84 0.00 0.00 215631.97 17351.44 240673.16 00:30:18.807 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.807 Verification LBA range: start 0x0 length 0x400 00:30:18.807 Nvme8n1 : 0.77 249.27 15.58 0.00 0.00 214255.91 16976.94 248662.31 00:30:18.807 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.807 Verification LBA range: start 0x0 length 0x400 00:30:18.807 Nvme9n1 : 0.77 248.08 15.50 0.00 0.00 209312.43 15978.30 241671.80 00:30:18.807 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.807 Verification LBA range: start 0x0 length 0x400 00:30:18.807 Nvme10n1 : 0.72 177.32 11.08 0.00 0.00 281052.89 19848.05 263641.97 00:30:18.807 [2024-11-06T14:34:46.445Z] =================================================================================================================== 00:30:18.807 [2024-11-06T14:34:46.445Z] Total : 2476.62 154.79 0.00 0.00 228277.63 5149.26 263641.97 00:30:19.744 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3980663 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:21.122 rmmod nvme_tcp 00:30:21.122 rmmod nvme_fabrics 00:30:21.122 rmmod nvme_keyring 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3980663 ']' 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3980663 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # 
'[' -z 3980663 ']' 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3980663 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3980663 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3980663' 00:30:21.122 killing process with pid 3980663 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3980663 00:30:21.122 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3980663 00:30:24.411 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:24.411 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:24.411 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:24.411 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:30:24.411 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:30:24.411 15:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:24.411 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:30:24.411 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.411 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.411 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.411 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.411 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.317 00:30:26.317 real 0m12.434s 00:30:26.317 user 0m41.523s 00:30:26.317 sys 0m1.625s 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.317 ************************************ 00:30:26.317 END TEST nvmf_shutdown_tc2 00:30:26.317 ************************************ 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:26.317 15:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:26.317 ************************************ 00:30:26.317 START TEST nvmf_shutdown_tc3 00:30:26.317 ************************************ 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:26.317 15:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:30:26.317 15:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:26.317 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:26.318 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:26.318 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:26.318 Found net devices under 0000:86:00.0: cvl_0_0 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:26.318 Found net devices under 0000:86:00.1: cvl_0_1 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:26.318 15:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:26.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:30:26.318 00:30:26.318 --- 10.0.0.2 ping statistics --- 00:30:26.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.318 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:26.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:30:26.318 00:30:26.318 --- 10.0.0.1 ping statistics --- 00:30:26.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.318 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:26.318 15:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.318 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3982887 00:30:26.319 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3982887 00:30:26.319 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:26.319 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3982887 ']' 00:30:26.319 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.319 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:26.319 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:26.319 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:26.319 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.578 [2024-11-06 15:34:54.027665] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:26.578 [2024-11-06 15:34:54.027767] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.578 [2024-11-06 15:34:54.158314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:26.836 [2024-11-06 15:34:54.263065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.836 [2024-11-06 15:34:54.263112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.836 [2024-11-06 15:34:54.263122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.836 [2024-11-06 15:34:54.263133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.836 [2024-11-06 15:34:54.263141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:26.836 [2024-11-06 15:34:54.265744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.836 [2024-11-06 15:34:54.265824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:26.836 [2024-11-06 15:34:54.265889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.836 [2024-11-06 15:34:54.265912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.404 [2024-11-06 15:34:54.883236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.404 15:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.404 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.404 Malloc1 00:30:27.664 [2024-11-06 15:34:55.049470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.664 Malloc2 00:30:27.664 Malloc3 00:30:27.923 Malloc4 00:30:27.923 Malloc5 00:30:27.923 Malloc6 00:30:28.182 Malloc7 00:30:28.182 Malloc8 00:30:28.182 Malloc9 
00:30:28.442 Malloc10 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3983177 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3983177 /var/tmp/bdevperf.sock 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3983177 ']' 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:30:28.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.442 { 00:30:28.442 "params": { 00:30:28.442 "name": "Nvme$subsystem", 00:30:28.442 "trtype": "$TEST_TRANSPORT", 00:30:28.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.442 "adrfam": "ipv4", 00:30:28.442 "trsvcid": "$NVMF_PORT", 00:30:28.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.442 "hdgst": ${hdgst:-false}, 00:30:28.442 "ddgst": ${ddgst:-false} 00:30:28.442 }, 00:30:28.442 "method": "bdev_nvme_attach_controller" 00:30:28.442 } 00:30:28.442 EOF 00:30:28.442 )") 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.442 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.442 { 00:30:28.442 "params": { 00:30:28.442 "name": "Nvme$subsystem", 00:30:28.442 "trtype": "$TEST_TRANSPORT", 00:30:28.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.443 
"adrfam": "ipv4", 00:30:28.443 "trsvcid": "$NVMF_PORT", 00:30:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.443 "hdgst": ${hdgst:-false}, 00:30:28.443 "ddgst": ${ddgst:-false} 00:30:28.443 }, 00:30:28.443 "method": "bdev_nvme_attach_controller" 00:30:28.443 } 00:30:28.443 EOF 00:30:28.443 )") 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.443 { 00:30:28.443 "params": { 00:30:28.443 "name": "Nvme$subsystem", 00:30:28.443 "trtype": "$TEST_TRANSPORT", 00:30:28.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.443 "adrfam": "ipv4", 00:30:28.443 "trsvcid": "$NVMF_PORT", 00:30:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.443 "hdgst": ${hdgst:-false}, 00:30:28.443 "ddgst": ${ddgst:-false} 00:30:28.443 }, 00:30:28.443 "method": "bdev_nvme_attach_controller" 00:30:28.443 } 00:30:28.443 EOF 00:30:28.443 )") 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.443 { 00:30:28.443 "params": { 00:30:28.443 "name": "Nvme$subsystem", 00:30:28.443 "trtype": "$TEST_TRANSPORT", 00:30:28.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.443 "adrfam": "ipv4", 00:30:28.443 "trsvcid": "$NVMF_PORT", 00:30:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:30:28.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.443 "hdgst": ${hdgst:-false}, 00:30:28.443 "ddgst": ${ddgst:-false} 00:30:28.443 }, 00:30:28.443 "method": "bdev_nvme_attach_controller" 00:30:28.443 } 00:30:28.443 EOF 00:30:28.443 )") 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.443 { 00:30:28.443 "params": { 00:30:28.443 "name": "Nvme$subsystem", 00:30:28.443 "trtype": "$TEST_TRANSPORT", 00:30:28.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.443 "adrfam": "ipv4", 00:30:28.443 "trsvcid": "$NVMF_PORT", 00:30:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.443 "hdgst": ${hdgst:-false}, 00:30:28.443 "ddgst": ${ddgst:-false} 00:30:28.443 }, 00:30:28.443 "method": "bdev_nvme_attach_controller" 00:30:28.443 } 00:30:28.443 EOF 00:30:28.443 )") 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.443 { 00:30:28.443 "params": { 00:30:28.443 "name": "Nvme$subsystem", 00:30:28.443 "trtype": "$TEST_TRANSPORT", 00:30:28.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.443 "adrfam": "ipv4", 00:30:28.443 "trsvcid": "$NVMF_PORT", 00:30:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.443 "hdgst": ${hdgst:-false}, 00:30:28.443 "ddgst": 
${ddgst:-false} 00:30:28.443 }, 00:30:28.443 "method": "bdev_nvme_attach_controller" 00:30:28.443 } 00:30:28.443 EOF 00:30:28.443 )") 00:30:28.443 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.443 { 00:30:28.443 "params": { 00:30:28.443 "name": "Nvme$subsystem", 00:30:28.443 "trtype": "$TEST_TRANSPORT", 00:30:28.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.443 "adrfam": "ipv4", 00:30:28.443 "trsvcid": "$NVMF_PORT", 00:30:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.443 "hdgst": ${hdgst:-false}, 00:30:28.443 "ddgst": ${ddgst:-false} 00:30:28.443 }, 00:30:28.443 "method": "bdev_nvme_attach_controller" 00:30:28.443 } 00:30:28.443 EOF 00:30:28.443 )") 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.443 { 00:30:28.443 "params": { 00:30:28.443 "name": "Nvme$subsystem", 00:30:28.443 "trtype": "$TEST_TRANSPORT", 00:30:28.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.443 "adrfam": "ipv4", 00:30:28.443 "trsvcid": "$NVMF_PORT", 00:30:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.443 "hdgst": ${hdgst:-false}, 00:30:28.443 "ddgst": ${ddgst:-false} 00:30:28.443 }, 00:30:28.443 "method": "bdev_nvme_attach_controller" 00:30:28.443 } 00:30:28.443 EOF 00:30:28.443 
)") 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.443 { 00:30:28.443 "params": { 00:30:28.443 "name": "Nvme$subsystem", 00:30:28.443 "trtype": "$TEST_TRANSPORT", 00:30:28.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.443 "adrfam": "ipv4", 00:30:28.443 "trsvcid": "$NVMF_PORT", 00:30:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.443 "hdgst": ${hdgst:-false}, 00:30:28.443 "ddgst": ${ddgst:-false} 00:30:28.443 }, 00:30:28.443 "method": "bdev_nvme_attach_controller" 00:30:28.443 } 00:30:28.443 EOF 00:30:28.443 )") 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.443 { 00:30:28.443 "params": { 00:30:28.443 "name": "Nvme$subsystem", 00:30:28.443 "trtype": "$TEST_TRANSPORT", 00:30:28.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.443 "adrfam": "ipv4", 00:30:28.443 "trsvcid": "$NVMF_PORT", 00:30:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.443 "hdgst": ${hdgst:-false}, 00:30:28.443 "ddgst": ${ddgst:-false} 00:30:28.443 }, 00:30:28.443 "method": "bdev_nvme_attach_controller" 00:30:28.443 } 00:30:28.443 EOF 00:30:28.443 )") 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:28.443 
15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:30:28.443 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:28.443 "params": { 00:30:28.443 "name": "Nvme1", 00:30:28.443 "trtype": "tcp", 00:30:28.443 "traddr": "10.0.0.2", 00:30:28.443 "adrfam": "ipv4", 00:30:28.443 "trsvcid": "4420", 00:30:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.443 "hdgst": false, 00:30:28.443 "ddgst": false 00:30:28.443 }, 00:30:28.443 "method": "bdev_nvme_attach_controller" 00:30:28.443 },{ 00:30:28.443 "params": { 00:30:28.443 "name": "Nvme2", 00:30:28.443 "trtype": "tcp", 00:30:28.443 "traddr": "10.0.0.2", 00:30:28.443 "adrfam": "ipv4", 00:30:28.443 "trsvcid": "4420", 00:30:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:28.443 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:28.443 "hdgst": false, 00:30:28.443 "ddgst": false 00:30:28.443 }, 00:30:28.443 "method": "bdev_nvme_attach_controller" 00:30:28.443 },{ 00:30:28.443 "params": { 00:30:28.443 "name": "Nvme3", 00:30:28.443 "trtype": "tcp", 00:30:28.443 "traddr": "10.0.0.2", 00:30:28.443 "adrfam": "ipv4", 00:30:28.443 "trsvcid": "4420", 00:30:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:28.444 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:28.444 "hdgst": false, 00:30:28.444 "ddgst": false 00:30:28.444 }, 00:30:28.444 "method": "bdev_nvme_attach_controller" 00:30:28.444 },{ 00:30:28.444 "params": { 00:30:28.444 "name": "Nvme4", 00:30:28.444 "trtype": "tcp", 00:30:28.444 "traddr": "10.0.0.2", 00:30:28.444 "adrfam": "ipv4", 00:30:28.444 "trsvcid": "4420", 00:30:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:28.444 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:28.444 "hdgst": false, 00:30:28.444 "ddgst": false 00:30:28.444 }, 
00:30:28.444 "method": "bdev_nvme_attach_controller" 00:30:28.444 },{ 00:30:28.444 "params": { 00:30:28.444 "name": "Nvme5", 00:30:28.444 "trtype": "tcp", 00:30:28.444 "traddr": "10.0.0.2", 00:30:28.444 "adrfam": "ipv4", 00:30:28.444 "trsvcid": "4420", 00:30:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:28.444 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:28.444 "hdgst": false, 00:30:28.444 "ddgst": false 00:30:28.444 }, 00:30:28.444 "method": "bdev_nvme_attach_controller" 00:30:28.444 },{ 00:30:28.444 "params": { 00:30:28.444 "name": "Nvme6", 00:30:28.444 "trtype": "tcp", 00:30:28.444 "traddr": "10.0.0.2", 00:30:28.444 "adrfam": "ipv4", 00:30:28.444 "trsvcid": "4420", 00:30:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:28.444 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:28.444 "hdgst": false, 00:30:28.444 "ddgst": false 00:30:28.444 }, 00:30:28.444 "method": "bdev_nvme_attach_controller" 00:30:28.444 },{ 00:30:28.444 "params": { 00:30:28.444 "name": "Nvme7", 00:30:28.444 "trtype": "tcp", 00:30:28.444 "traddr": "10.0.0.2", 00:30:28.444 "adrfam": "ipv4", 00:30:28.444 "trsvcid": "4420", 00:30:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:28.444 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:28.444 "hdgst": false, 00:30:28.444 "ddgst": false 00:30:28.444 }, 00:30:28.444 "method": "bdev_nvme_attach_controller" 00:30:28.444 },{ 00:30:28.444 "params": { 00:30:28.444 "name": "Nvme8", 00:30:28.444 "trtype": "tcp", 00:30:28.444 "traddr": "10.0.0.2", 00:30:28.444 "adrfam": "ipv4", 00:30:28.444 "trsvcid": "4420", 00:30:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:28.444 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:28.444 "hdgst": false, 00:30:28.444 "ddgst": false 00:30:28.444 }, 00:30:28.444 "method": "bdev_nvme_attach_controller" 00:30:28.444 },{ 00:30:28.444 "params": { 00:30:28.444 "name": "Nvme9", 00:30:28.444 "trtype": "tcp", 00:30:28.444 "traddr": "10.0.0.2", 00:30:28.444 "adrfam": "ipv4", 00:30:28.444 "trsvcid": "4420", 00:30:28.444 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:28.444 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:28.444 "hdgst": false, 00:30:28.444 "ddgst": false 00:30:28.444 }, 00:30:28.444 "method": "bdev_nvme_attach_controller" 00:30:28.444 },{ 00:30:28.444 "params": { 00:30:28.444 "name": "Nvme10", 00:30:28.444 "trtype": "tcp", 00:30:28.444 "traddr": "10.0.0.2", 00:30:28.444 "adrfam": "ipv4", 00:30:28.444 "trsvcid": "4420", 00:30:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:28.444 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:28.444 "hdgst": false, 00:30:28.444 "ddgst": false 00:30:28.444 }, 00:30:28.444 "method": "bdev_nvme_attach_controller" 00:30:28.444 }' 00:30:28.444 [2024-11-06 15:34:56.036753] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:30:28.444 [2024-11-06 15:34:56.036846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3983177 ] 00:30:28.703 [2024-11-06 15:34:56.162997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.703 [2024-11-06 15:34:56.269384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.608 Running I/O for 10 seconds... 
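The bdevperf config streamed via `--json /dev/fd/63` above is produced by `gen_nvmf_target_json`: one heredoc fragment per subsystem (the unquoted `EOF` delimiter lets `$subsystem` and the `NVMF_*` variables expand in place), accumulated into an array and comma-joined into the final document. A simplified sketch of that pattern (function name and hard-coded transport/address values are assumptions for illustration):

```shell
# Simplified sketch of the gen_nvmf_target_json pattern seen in the log:
# each loop iteration appends one controller stanza produced by a heredoc,
# and the fragments are then comma-joined, mirroring the printf output above.
gen_target_json_sketch() {
  local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
  local config=() subsystem
  for subsystem in "${@:-1}"; do        # default to subsystem 1, as in the log
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
  "traddr": "$NVMF_FIRST_TARGET_IP", "adrfam": "ipv4",
  "trsvcid": "$NVMF_PORT",
  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
  "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
  done
  local IFS=,            # join stanzas with commas, as the final printf does
  printf '%s\n' "${config[*]}"
}
```

Keeping the fragments in an array and joining only at the end is what lets the real helper pipe the assembled list through `jq .` for validation before handing it to bdevperf.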
00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:31.176 15:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:30:31.176 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3982887 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3982887 ']' 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3982887 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:31.445 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3982887 00:30:31.445 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:31.445 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:31.445 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3982887' 00:30:31.445 killing process with pid 3982887 00:30:31.445 15:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3982887 00:30:31.445 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3982887 00:30:31.445 [2024-11-06 15:34:59.024696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024844] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 
[2024-11-06 15:34:59.024954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.024998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.025007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.025016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.025025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.025034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.025042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.025050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the 
state(6) to be set 00:30:31.445 [2024-11-06 15:34:59.025059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025265] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.025308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.027937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.027961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.027970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.027979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.027992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 
[2024-11-06 15:34:59.028009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the 
state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028319] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 [2024-11-06 15:34:59.028412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.446 
[2024-11-06 15:34:59.028421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.028429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.028437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.028448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.028457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.028466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.028474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.028482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.028491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.028500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.031964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the 
state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032226] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 
[2024-11-06 15:34:59.032328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the 
state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.032547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.033624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.447 [2024-11-06 15:34:59.033664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.447 [2024-11-06 15:34:59.033680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.447 [2024-11-06 15:34:59.033706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.447 [2024-11-06 15:34:59.033718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.447 [2024-11-06 15:34:59.033729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.447 [2024-11-06 15:34:59.033739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.447 [2024-11-06 15:34:59.033750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.447 [2024-11-06 15:34:59.033760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000336380 is same with the state(6) to be set 00:30:31.447 [2024-11-06 15:34:59.033838] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.448 [2024-11-06 15:34:59.033851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.448 [2024-11-06 15:34:59.033863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.448 [2024-11-06 15:34:59.033872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.448 [2024-11-06 15:34:59.033884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.448 [2024-11-06 15:34:59.033894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.448 [2024-11-06 15:34:59.033905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.448 [2024-11-06 15:34:59.033916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.448 [2024-11-06 15:34:59.033924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032fa80 is same with the state(6) to be set
00:30:31.448 [2024-11-06 15:34:59.033999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.448 [2024-11-06 15:34:59.034012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.448 [2024-11-06 15:34:59.034023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.448 [2024-11-06 15:34:59.034035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.448 [2024-11-06 15:34:59.034047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.448 [2024-11-06 15:34:59.034058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.448 [2024-11-06 15:34:59.034069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.448 [2024-11-06 15:34:59.034080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.448 [2024-11-06 15:34:59.034090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032eb80 is same with the state(6) to be set
00:30:31.448 [2024-11-06 15:34:59.034121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.448 [2024-11-06 15:34:59.034132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.448 [2024-11-06 15:34:59.034143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.448 [2024-11-06 15:34:59.034153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.448 [2024-11-06 15:34:59.034164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.448 [2024-11-06 15:34:59.034174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.448 [2024-11-06 15:34:59.034186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:31.448 [2024-11-06 15:34:59.034198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.448 [2024-11-06 15:34:59.034216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:30:31.448 [2024-11-06 15:34:59.037636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:30:31.448 [2024-11-06 15:34:59.037670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:30:31.448 [2024-11-06 15:34:59.037683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:30:31.448 [2024-11-06 15:34:59.037694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:30:31.448 [2024-11-06 15:34:59.037703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:30:31.448 [2024-11-06 15:34:59.037712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:30:31.448 [2024-11-06 15:34:59.037722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:30:31.448 [2024-11-06 15:34:59.037730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be
set
00:30:31.449 [2024-11-06 15:34:59.040578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set
00:30:31.449 [2024-11-06 15:34:59.040625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the
state(6) to be set
00:30:31.450 [2024-11-06 15:34:59.044272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set
00:30:31.450 [2024-11-06 15:34:59.046582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:30:31.450 [2024-11-06 15:34:59.046684]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set
00:30:31.450 [2024-11-06 15:34:59.048443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set
00:30:31.450
[2024-11-06 15:34:59.048497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.450 [2024-11-06 15:34:59.048506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.450 [2024-11-06 15:34:59.048515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.450 [2024-11-06 15:34:59.048523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.450 [2024-11-06 15:34:59.048531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.450 [2024-11-06 15:34:59.048540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.450 [2024-11-06 15:34:59.048549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.450 [2024-11-06 15:34:59.048557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.450 [2024-11-06 15:34:59.048565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.450 [2024-11-06 15:34:59.048574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.450 [2024-11-06 15:34:59.048583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.450 [2024-11-06 15:34:59.048592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the 
state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048806] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 
[2024-11-06 15:34:59.048912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.048998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.049006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the 
state(6) to be set 00:30:31.451 [2024-11-06 15:34:59.051898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.451 [2024-11-06 15:34:59.051934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.451 (command/completion pair repeated for WRITE cid:32-63 lba:28672-32640 and READ cid:0-30 lba:24576-28416, each len:128, each aborted with SQ DELETION) [2024-11-06 15:34:59.053476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.453 [2024-11-06 15:34:59.054193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 (command/completion pair repeated for WRITE cid:5-17 lba:25216-26752, each len:128, each aborted with SQ DELETION) [2024-11-06 15:34:59.054553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 
15:34:59.054810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.453 [2024-11-06 15:34:59.054946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.453 [2024-11-06 15:34:59.054957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.054968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.054977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.054991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 
[2024-11-06 15:34:59.055346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 
nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.454 [2024-11-06 15:34:59.055692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.055736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.454 [2024-11-06 15:34:59.056310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000336380 (9): 
Bad file descriptor 00:30:31.454 [2024-11-06 15:34:59.056373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.454 [2024-11-06 15:34:59.056388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.454 [2024-11-06 15:34:59.056402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.454 [2024-11-06 15:34:59.056411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000334580 is same with the state(6) to be set 00:30:31.455 [2024-11-06 15:34:59.056500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056526] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000335480 is same with the state(6) to be set 00:30:31.455 [2024-11-06 15:34:59.056612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032fa80 (9): Bad file descriptor 00:30:31.455 [2024-11-06 15:34:59.056648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000332780 is same with the state(6) to be set 00:30:31.455 [2024-11-06 15:34:59.056772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056838] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000333680 is same with the state(6) to be set 00:30:31.455 [2024-11-06 15:34:59.056883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.056959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.056968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000330980 is same 
with the state(6) to be set 00:30:31.455 [2024-11-06 15:34:59.056996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.057011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.057033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.057053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.455 [2024-11-06 15:34:59.057074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000331880 is same with the state(6) to be set 00:30:31.455 [2024-11-06 15:34:59.057102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032eb80 (9): Bad file descriptor 00:30:31.455 [2024-11-06 15:34:59.057123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:30:31.455 [2024-11-06 15:34:59.057182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.455 [2024-11-06 15:34:59.057499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.455 [2024-11-06 15:34:59.057510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057595] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057747] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.057976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.057985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 
[2024-11-06 15:34:59.057999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.456 [2024-11-06 15:34:59.058402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.456 [2024-11-06 15:34:59.058412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.058424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.058434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.058446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.058455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.058467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.058477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.058489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 
15:34:59.058499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.058511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.058529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.058542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.058552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.058564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.058575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.058587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.058596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.058609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.058619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.058630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.058641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.058653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.058662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.061325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:30:31.457 [2024-11-06 15:34:59.061368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000330980 (9): Bad file descriptor 00:30:31.457 [2024-11-06 15:34:59.062720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:30:31.457 [2024-11-06 15:34:59.062754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:31.457 [2024-11-06 15:34:59.062787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000333680 (9): Bad file descriptor 00:30:31.457 [2024-11-06 15:34:59.063977] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.457 [2024-11-06 15:34:59.064114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.457 [2024-11-06 15:34:59.064139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000330980 with addr=10.0.0.2, port=4420 00:30:31.457 [2024-11-06 15:34:59.064154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000330980 is same with the state(6) to be set 
00:30:31.457 [2024-11-06 15:34:59.064306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.457 [2024-11-06 15:34:59.064323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:30:31.457 [2024-11-06 15:34:59.064334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:30:31.457 [2024-11-06 15:34:59.064802] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.457 [2024-11-06 15:34:59.064871] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.457 [2024-11-06 15:34:59.065028] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.457 [2024-11-06 15:34:59.065089] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.457 [2024-11-06 15:34:59.065253] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:31.457 [2024-11-06 15:34:59.065387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.457 [2024-11-06 15:34:59.065407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000333680 with addr=10.0.0.2, port=4420 00:30:31.457 [2024-11-06 15:34:59.065420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000333680 is same with the state(6) to be set 00:30:31.457 [2024-11-06 15:34:59.065435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000330980 (9): Bad file descriptor 00:30:31.457 [2024-11-06 15:34:59.065451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:30:31.457 [2024-11-06 15:34:59.065651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065810] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.065983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.065995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.066007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.066017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.066029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.066039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.066051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.066061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.066074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.066084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.066097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.457 [2024-11-06 15:34:59.066107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.457 [2024-11-06 15:34:59.066119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066218] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066341] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 
15:34:59.066602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.458 [2024-11-06 15:34:59.066910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.458 [2024-11-06 15:34:59.066920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.066931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.066941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.066954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.066963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:31.459 [2024-11-06 15:34:59.066975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.066985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.066997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.067007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.067018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.067028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.067040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.067051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.067062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.067073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.067085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.067095] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.459 [2024-11-06 15:34:59.067108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.459 [2024-11-06 15:34:59.067117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:31.459 [2024-11-06 15:34:59.067129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500033ac00 is same with the state(6) to be set
00:30:31.459 [2024-11-06 15:34:59.067461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000333680 (9): Bad file descriptor
00:30:31.459 [2024-11-06 15:34:59.067479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:30:31.459 [2024-11-06 15:34:59.067492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:30:31.459 [2024-11-06 15:34:59.067504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:30:31.459 [2024-11-06 15:34:59.067516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:30:31.459 [2024-11-06 15:34:59.067530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:31.459 [2024-11-06 15:34:59.067540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:31.459 [2024-11-06 15:34:59.067548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:31.459 [2024-11-06 15:34:59.067557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:30:31.459 [2024-11-06 15:34:59.067591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000334580 (9): Bad file descriptor
00:30:31.459 [2024-11-06 15:34:59.067609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000335480 (9): Bad file descriptor
00:30:31.459 [2024-11-06 15:34:59.067639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000332780 (9): Bad file descriptor
00:30:31.459 [2024-11-06 15:34:59.067676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000331880 (9): Bad file descriptor
00:30:31.459 [2024-11-06 15:34:59.068762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:30:31.459 [2024-11-06 15:34:59.068795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:30:31.459 [2024-11-06 15:34:59.068807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:30:31.459 [2024-11-06 15:34:59.068818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:30:31.459 [2024-11-06 15:34:59.068828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
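Aside on reading the completion records above: the "(00/08)" pair printed by spdk_nvme_print_completion is the NVMe status code type (SCT) and status code (SC); SCT 0x00 is the Generic Command Status set, in which SC 0x08 is "Command Aborted due to SQ Deletion", matching the "ABORTED - SQ DELETION" text. A minimal, illustrative parser for such records (the regex and the deliberately partial status table below are assumptions for this sketch, not part of SPDK) could look like:

```python
import re

# Partial map of NVMe Generic Command Status (SCT 0x00) codes; illustrative only.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x04: "DATA TRANSFER ERROR",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",
}

# Matches the "(sct/sc) qid:N cid:N" fragment of a completion record.
COMPLETION_RE = re.compile(
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)\s+qid:(?P<qid>\d+)\s+cid:(?P<cid>\d+)"
)

def decode_completion(line):
    """Extract sct/sc/qid/cid from one spdk_nvme_print_completion record."""
    m = COMPLETION_RE.search(line)
    if m is None:
        return None
    sct = int(m.group("sct"), 16)
    sc = int(m.group("sc"), 16)
    status = GENERIC_STATUS.get(sc, "UNKNOWN") if sct == 0x00 else "NON-GENERIC"
    return {"sct": sct, "sc": sc, "qid": int(m.group("qid")),
            "cid": int(m.group("cid")), "status": status}

record = ("*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 "
          "cdw0:0 sqhd:0000 p:0 m:0 dnr:0")
info = decode_completion(record)
```

Run over the log, every aborted READ/WRITE here decodes to SCT 0x00 / SC 0x08 on qid 1, i.e. in-flight I/O discarded when the submission queue was torn down during the controller reset, rather than media or transport data errors.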
00:30:31.459 [2024-11-06 15:34:59.068881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.068896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.068913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.068923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.068936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.068946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.068958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.068968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.068982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.068993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069016] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.459 [2024-11-06 15:34:59.069375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.459 [2024-11-06 15:34:59.069385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069408] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 
15:34:59.069796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069920] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.069987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.069996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.070019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.070043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.070065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.070088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.070111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.070133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.070155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 
[2024-11-06 15:34:59.070177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.070198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.070226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.070248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.070270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.460 [2024-11-06 15:34:59.070292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.460 [2024-11-06 15:34:59.070302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.461 [2024-11-06 15:34:59.070314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.461 [2024-11-06 15:34:59.070326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.461 [2024-11-06 15:34:59.070338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.461 [2024-11-06 15:34:59.070349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.461 [2024-11-06 15:34:59.070359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000338e00 is same with the state(6) to be set 00:30:31.461 [2024-11-06 15:34:59.071677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.461 [2024-11-06 15:34:59.071696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.461 [2024-11-06 15:34:59.071713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.461 [2024-11-06 15:34:59.071724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.461 [2024-11-06 15:34:59.071737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.461 [2024-11-06 15:34:59.071748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.461 [2024-11-06 15:34:59.071761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.461 [2024-11-06 15:34:59.071772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.461 [2024-11-06 15:34:59.071785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.726 [2024-11-06 15:34:59.071797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.726 [2024-11-06 15:34:59.071810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.726 [2024-11-06 15:34:59.071823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.726 [2024-11-06 15:34:59.071837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.726 [2024-11-06 15:34:59.071848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.726 [2024-11-06 15:34:59.071861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.726 [2024-11-06 15:34:59.071871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.726 [2024-11-06 15:34:59.071885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.726 [2024-11-06 15:34:59.071897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:31.726 [2024-11-06 15:34:59.071910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.726 [2024-11-06 15:34:59.071921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.726 [2024-11-06 15:34:59.071935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.726 [2024-11-06 15:34:59.071950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.726 [2024-11-06 15:34:59.071965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.726 [2024-11-06 15:34:59.071978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.726 [2024-11-06 15:34:59.071994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.726 [2024-11-06 15:34:59.072004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.726 [2024-11-06 15:34:59.072016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.726 [2024-11-06 15:34:59.072027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.726 [2024-11-06 15:34:59.072040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.726 [2024-11-06 15:34:59.072051] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.726 [2024-11-06 15:34:59.072063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 
15:34:59.072452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072576] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 
[2024-11-06 15:34:59.072846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.727 [2024-11-06 15:34:59.072980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.727 [2024-11-06 15:34:59.072992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.073003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.073015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.073026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.073038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.073048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.073061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.073071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.073083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.073093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.073105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.073123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.073136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.073147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.073159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.073169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.073180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.073191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.073206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000339300 is same with the state(6) to be set 00:30:31.728 [2024-11-06 15:34:59.074590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:31.728 [2024-11-06 15:34:59.074636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.074660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.074686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.074710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.074732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.074755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.074779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.074801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.074825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.074849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.074873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728 [2024-11-06 15:34:59.074896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.728 [2024-11-06 15:34:59.074905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.728
[... 51 similar entries collapsed (15:34:59.074918-15:34:59.076097): nvme_qpair.c: 243:nvme_io_qpair_print_command READ sqid:1 cid:13-63 nsid:1 lba:18048-24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:30:31.729 [2024-11-06 15:34:59.076108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500033b600 is same with the state(6) to be set 00:30:31.729 [2024-11-06 15:34:59.077393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:30:31.729 [2024-11-06 15:34:59.077416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:30:31.729 [2024-11-06 15:34:59.077430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:30:31.729 [2024-11-06 15:34:59.077647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.729 [2024-11-06 15:34:59.077668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000334580 with addr=10.0.0.2, port=4420 00:30:31.729 [2024-11-06 15:34:59.077681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000334580 is same with the state(6) to be set 00:30:31.729 [2024-11-06 15:34:59.078356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.729 [2024-11-06 15:34:59.078384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:30:31.729 [2024-11-06 15:34:59.078397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032eb80 is same with the state(6) to be set 00:30:31.729 [2024-11-06 15:34:59.078580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.729 [2024-11-06 15:34:59.078596]
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032fa80 with addr=10.0.0.2, port=4420 00:30:31.729 [2024-11-06 15:34:59.078607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032fa80 is same with the state(6) to be set 00:30:31.729 [2024-11-06 15:34:59.078709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.730 [2024-11-06 15:34:59.078724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000336380 with addr=10.0.0.2, port=4420 00:30:31.730 [2024-11-06 15:34:59.078735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000336380 is same with the state(6) to be set 00:30:31.730 [2024-11-06 15:34:59.078748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000334580 (9): Bad file descriptor 00:30:31.730 [2024-11-06 15:34:59.079988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:31.730 [2024-11-06 15:34:59.080016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:30:31.730 [2024-11-06 15:34:59.080028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:30:31.730 [2024-11-06 15:34:59.080080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032eb80 (9): Bad file descriptor 00:30:31.730 [2024-11-06 15:34:59.080100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032fa80 (9): Bad file descriptor 00:30:31.730 [2024-11-06 15:34:59.080113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000336380 (9): Bad file descriptor 00:30:31.730 [2024-11-06 15:34:59.080125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr 
is in error state 00:30:31.730 [2024-11-06 15:34:59.080134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:30:31.730 [2024-11-06 15:34:59.080144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:30:31.730 [2024-11-06 15:34:59.080154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:30:31.730
[... 50 similar entries collapsed (15:34:59.080370-15:34:59.081536): nvme_qpair.c: 243:nvme_io_qpair_print_command READ sqid:1 cid:0-49 nsid:1 lba:16384-22656 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:30:31.731
[2024-11-06 15:34:59.081545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.081851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.081862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000339d00 is same with the state(6) to be set 00:30:31.731 [2024-11-06 15:34:59.083147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.083168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.731 [2024-11-06 15:34:59.083185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.083197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:31.731 [2024-11-06 15:34:59.083215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.731 [2024-11-06 15:34:59.083226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:31.732 [2024-11-06 15:34:59.083622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083756] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.083986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.083997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.084010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.084021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.084032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.084044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.084057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.084068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.084080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.084090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.084102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.084112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.084125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 15:34:59.084136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.084148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.732 [2024-11-06 
15:34:59.084159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.732 [2024-11-06 15:34:59.084172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084306] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 [2024-11-06 15:34:59.084543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.733 [2024-11-06 15:34:59.084554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.733 
[2024-11-06 15:34:59.084564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ (sqid:1, nsid:1, len:128) / ABORTED - SQ DELETION (00/08) pairs repeated for cid:59-63, lba:23936-24448, omitted ...]
00:30:31.733 [2024-11-06 15:34:59.084687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500033a200 is same with the state(6) to be set
00:30:31.733 [2024-11-06 15:34:59.086034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.733 [2024-11-06 15:34:59.086055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ/ABORTED - SQ DELETION pairs repeated for cid:1-63, lba:16512-24448 (len:128, lba step 128), omitted ...]
00:30:31.735 [2024-11-06 15:34:59.087527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500033b100 is same with the state(6) to be set
00:30:31.735 [2024-11-06 15:34:59.088886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:30:31.735 [2024-11-06 15:34:59.088910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:30:31.735 task offset: 28544 on job bdev=Nvme4n1 fails
00:30:31.735
00:30:31.735 Latency(us)
00:30:31.735 [2024-11-06T14:34:59.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:31.735 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended with error)
00:30:31.735 Nvme1n1  : 0.90  213.10  13.32  71.03  0.00  222833.62  10111.27  245666.38
00:30:31.735 Nvme2n1  : 0.91  215.39  13.46  70.33  0.00  217467.89  29335.16  216705.71
00:30:31.735 Nvme3n1  : 0.91  214.72  13.42  70.11  0.00  214014.41  16727.28  241671.80
00:30:31.735 Nvme4n1  : 0.90  213.62  13.35  71.21  0.00  209646.93   7583.45  244667.73
00:30:31.735 Nvme5n1  : 0.92  138.91   8.68  69.45  0.00  281563.59  19723.22  248662.31
00:30:31.735 Nvme6n1  : 0.92  138.49   8.66  69.24  0.00  277057.67  21221.18  261644.68
00:30:31.735 Nvme7n1  : 0.90  213.38  13.34  71.13  0.00  197411.35  16602.45  236678.58
00:30:31.735 Nvme8n1  : 0.91  211.60  13.22  70.53  0.00  195113.45  18724.57  254654.17
00:30:31.735 Nvme9n1  : 0.93  138.06   8.63  69.03  0.00  261212.32  21970.16  253655.53
00:30:31.735 Nvme10n1 : 0.92  139.78   8.74  69.89  0.00  251880.59  20472.20  265639.25
00:30:31.735 [2024-11-06T14:34:59.373Z] ===================================================================================================================
00:30:31.735 [2024-11-06T14:34:59.373Z] Total    :      1837.03 114.81 701.96  0.00  228873.65   7583.45  265639.25
00:30:31.735 [2024-11-06 15:34:59.222941] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:31.735 [2024-11-06 15:34:59.223005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:30:31.735 [2024-11-06 15:34:59.223270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.735 [2024-11-06 15:34:59.223298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:30:31.735 [2024-11-06 15:34:59.223314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:30:31.735 [2024-11-06 15:34:59.223481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.735 [2024-11-06 15:34:59.223497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000330980 with addr=10.0.0.2, port=4420
00:30:31.735 [2024-11-06 15:34:59.223509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000330980 is same with the state(6) to be set
00:30:31.735 [2024-11-06 15:34:59.223689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.735 [2024-11-06 15:34:59.223704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000333680 with addr=10.0.0.2, port=4420
00:30:31.735 [2024-11-06 15:34:59.223721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000333680 is same with the state(6) to be set
00:30:31.735 [2024-11-06 15:34:59.223733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:30:31.735 [2024-11-06 15:34:59.223744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:30:31.735 [2024-11-06 15:34:59.223757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:30:31.735 [2024-11-06 15:34:59.223772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:30:31.735 [2024-11-06 15:34:59.223785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:30:31.735 [2024-11-06 15:34:59.223794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:30:31.735 [2024-11-06 15:34:59.223804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:30:31.735 [2024-11-06 15:34:59.223814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:30:31.735 [2024-11-06 15:34:59.223824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:30:31.736 [2024-11-06 15:34:59.223833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:30:31.736 [2024-11-06 15:34:59.223842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:30:31.736 [2024-11-06 15:34:59.223850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:30:31.736 [2024-11-06 15:34:59.224030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:30:31.736 [2024-11-06 15:34:59.224254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.736 [2024-11-06 15:34:59.224276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000331880 with addr=10.0.0.2, port=4420
00:30:31.736 [2024-11-06 15:34:59.224288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000331880 is same with the state(6) to be set
00:30:31.736 [2024-11-06 15:34:59.224442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.736 [2024-11-06 15:34:59.224458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000332780 with addr=10.0.0.2, port=4420
00:30:31.736 [2024-11-06 15:34:59.224469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000332780 is same with the state(6) to be set
00:30:31.736 [2024-11-06 15:34:59.224603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.736 [2024-11-06 15:34:59.224619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000335480 with addr=10.0.0.2, port=4420
[2024-11-06 15:34:59.224631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000335480 is same with the state(6) to be set
00:30:31.736 [2024-11-06 15:34:59.224648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:30:31.736 [2024-11-06 15:34:59.224665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000330980 (9): Bad file descriptor
00:30:31.736 [2024-11-06 15:34:59.224679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000333680 (9): Bad file descriptor
00:30:31.736 [2024-11-06 15:34:59.224698] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:30:31.736 [2024-11-06 15:34:59.224714] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:30:31.736 [2024-11-06 15:34:59.224728] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:30:31.736 [2024-11-06 15:34:59.224753] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:30:31.736 [2024-11-06 15:34:59.224769] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:30:31.736 [2024-11-06 15:34:59.224782] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:30:31.736 [2024-11-06 15:34:59.225991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:30:31.736 [2024-11-06 15:34:59.226021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:30:31.736 [2024-11-06 15:34:59.226035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:30:31.736 [2024-11-06 15:34:59.226245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.736 [2024-11-06 15:34:59.226266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000334580 with addr=10.0.0.2, port=4420
00:30:31.736 [2024-11-06 15:34:59.226279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000334580 is same with the state(6) to be set
00:30:31.736 [2024-11-06 15:34:59.226294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000331880 (9): Bad file descriptor
00:30:31.736 [2024-11-06 15:34:59.226308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000332780 (9): Bad file descriptor
00:30:31.736 [2024-11-06 15:34:59.226322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000335480 (9): Bad file descriptor
00:30:31.736 [2024-11-06 15:34:59.226333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:31.736 [2024-11-06 15:34:59.226343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:31.736 [2024-11-06 15:34:59.226354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:31.736 [2024-11-06 15:34:59.226364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:30:31.736 [2024-11-06 15:34:59.226375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:30:31.736 [2024-11-06 15:34:59.226384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:30:31.736 [2024-11-06 15:34:59.226393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:30:31.736 [2024-11-06 15:34:59.226403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:30:31.736 [2024-11-06 15:34:59.226413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:30:31.736 [2024-11-06 15:34:59.226421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:30:31.736 [2024-11-06 15:34:59.226430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:30:31.736 [2024-11-06 15:34:59.226439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:30:31.736 [2024-11-06 15:34:59.226813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.736 [2024-11-06 15:34:59.226835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000336380 with addr=10.0.0.2, port=4420 00:30:31.736 [2024-11-06 15:34:59.226847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000336380 is same with the state(6) to be set 00:30:31.736 [2024-11-06 15:34:59.226945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.736 [2024-11-06 15:34:59.226964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032fa80 with addr=10.0.0.2, port=4420 00:30:31.736 [2024-11-06 15:34:59.226976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032fa80 is same with the state(6) to be set 00:30:31.736 [2024-11-06 15:34:59.227084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.736 [2024-11-06 15:34:59.227100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:30:31.736 [2024-11-06 15:34:59.227111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032eb80 is same with the state(6) to be set 00:30:31.736 [2024-11-06 15:34:59.227123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000334580 (9): Bad file descriptor 00:30:31.736 [2024-11-06 15:34:59.227135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:30:31.736 [2024-11-06 15:34:59.227145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:30:31.736 [2024-11-06 15:34:59.227154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed 
state. 00:30:31.736 [2024-11-06 15:34:59.227163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:30:31.736 [2024-11-06 15:34:59.227174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:30:31.736 [2024-11-06 15:34:59.227183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:30:31.736 [2024-11-06 15:34:59.227192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:30:31.736 [2024-11-06 15:34:59.227210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:30:31.736 [2024-11-06 15:34:59.227221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:30:31.736 [2024-11-06 15:34:59.227229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:30:31.736 [2024-11-06 15:34:59.227238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:30:31.736 [2024-11-06 15:34:59.227248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:30:31.736 [2024-11-06 15:34:59.227318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000336380 (9): Bad file descriptor 00:30:31.736 [2024-11-06 15:34:59.227334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032fa80 (9): Bad file descriptor 00:30:31.736 [2024-11-06 15:34:59.227347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032eb80 (9): Bad file descriptor 00:30:31.736 [2024-11-06 15:34:59.227365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:30:31.736 [2024-11-06 15:34:59.227375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:30:31.736 [2024-11-06 15:34:59.227384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:30:31.736 [2024-11-06 15:34:59.227392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:30:31.736 [2024-11-06 15:34:59.227428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:30:31.736 [2024-11-06 15:34:59.227439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:30:31.736 [2024-11-06 15:34:59.227447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:30:31.736 [2024-11-06 15:34:59.227458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:30:31.736 [2024-11-06 15:34:59.227469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:30:31.736 [2024-11-06 15:34:59.227477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:30:31.736 [2024-11-06 15:34:59.227486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:30:31.736 [2024-11-06 15:34:59.227494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:30:31.736 [2024-11-06 15:34:59.227505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:30:31.736 [2024-11-06 15:34:59.227513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:30:31.736 [2024-11-06 15:34:59.227523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:30:31.736 [2024-11-06 15:34:59.227531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:30:35.027 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3983177 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3983177 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3983177 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:35.595 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:35.596 rmmod nvme_tcp 00:30:35.596 rmmod nvme_fabrics 00:30:35.596 rmmod nvme_keyring 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:30:35.596 15:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3982887 ']' 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3982887 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3982887 ']' 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3982887 00:30:35.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3982887) - No such process 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3982887 is not found' 00:30:35.596 Process with pid 3982887 is not found 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.596 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.133 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.133 00:30:38.133 real 0m11.622s 00:30:38.133 user 0m34.275s 00:30:38.133 sys 0m1.673s 00:30:38.133 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:38.133 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:38.133 ************************************ 00:30:38.133 END TEST nvmf_shutdown_tc3 00:30:38.133 ************************************ 00:30:38.133 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:30:38.133 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:30:38.133 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:30:38.133 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:38.133 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:38.133 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:38.134 ************************************ 00:30:38.134 START TEST nvmf_shutdown_tc4 00:30:38.134 ************************************ 00:30:38.134 15:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:38.134 15:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:38.134 15:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:38.134 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:38.134 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.134 15:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:30:38.134 Found net devices under 0000:86:00.0: cvl_0_0 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:38.134 Found net devices under 0000:86:00.1: cvl_0_1 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:38.134 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:38.135 15:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:38.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:38.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms
00:30:38.135
00:30:38.135 --- 10.0.0.2 ping statistics ---
00:30:38.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:38.135 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:38.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:38.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms
00:30:38.135
00:30:38.135 --- 10.0.0.1 ping statistics ---
00:30:38.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:38.135 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3984890
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3984890
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3984890 ']'
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
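The `waitforlisten` call above blocks until the freshly launched nvmf_tgt exposes its RPC socket at /var/tmp/spdk.sock, bounded by `max_retries=100`. A rough equivalent of that polling loop, assuming only that the target creates a UNIX socket at the given path (`wait_for_rpc_sock` is our name for the sketch, not SPDK's helper):

```shell
#!/usr/bin/env bash
# Poll for a UNIX-domain socket to appear, with a bounded retry count,
# mirroring the waitforlisten idiom from common/autotest_common.sh.
wait_for_rpc_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ ! -S "$sock" ]; do          # -S: path exists and is a socket
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timed out waiting for $sock" >&2
            return 1
        fi
        sleep 0.1
    done
}
```

Polling the socket rather than the PID matters here: the process can be alive long before its RPC listener is ready to accept the `rpc_cmd` calls that follow.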
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable
00:30:38.135 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:30:38.135 [2024-11-06 15:35:05.726139] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
[2024-11-06 15:35:05.726232] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:38.395 [2024-11-06 15:35:05.859758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:38.395 [2024-11-06 15:35:05.975783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:38.395 [2024-11-06 15:35:05.975827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:38.395 [2024-11-06 15:35:05.975838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:38.395 [2024-11-06 15:35:05.975865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:38.395 [2024-11-06 15:35:05.975874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:38.395 [2024-11-06 15:35:05.978323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:38.395 [2024-11-06 15:35:05.978431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:38.395 [2024-11-06 15:35:05.978523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:38.395 [2024-11-06 15:35:05.978545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:38.963 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:30:38.963 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0
00:30:38.963 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:38.963 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:38.963 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:30:38.963 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:38.963 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:38.963 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:38.963 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:30:38.963 [2024-11-06 15:35:06.580634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:39.222 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:30:39.222 Malloc1
[2024-11-06 15:35:06.749256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:39.222 Malloc2
00:30:39.481 Malloc3
00:30:39.481 Malloc4
00:30:39.740 Malloc5
00:30:39.740 Malloc6
00:30:39.740 Malloc7
00:30:39.998 Malloc8
00:30:39.998 Malloc9
00:30:39.998 Malloc10
00:30:40.257 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:40.257 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:30:40.257 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:40.257 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:30:40.257 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3985226
00:30:40.257 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:30:40.257 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
[2024-11-06 15:35:07.798457] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:30:45.537 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:45.537 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3984890
00:30:45.537 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3984890 ']'
00:30:45.537 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3984890
00:30:45.537 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname
00:30:45.537 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:30:45.537 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3984890
00:30:45.537 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:30:45.537 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:30:45.537 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3984890'
killing process with pid 3984890
00:30:45.537 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3984890
00:30:45.537 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3984890
00:30:45.537 [2024-11-06 15:35:12.752907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e880 is same with the state(6) to be set
00:30:45.537 [2024-11-06 15:35:12.752966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e880 is same with the state(6) to be set
00:30:45.537 [2024-11-06 15:35:12.752977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e880 is same with the state(6) to be set
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 starting I/O failed: -6
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 starting I/O failed: -6
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 starting I/O failed: -6
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 starting I/O failed: -6
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 starting I/O failed: -6
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 starting I/O failed: -6
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
00:30:45.537 Write completed with error (sct=0, sc=8)
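The `trap ... SIGINT SIGTERM EXIT` lines above (nvmf/common.sh@512 and target/shutdown.sh@152) are how the harness guarantees teardown even when a test case dies mid-run: dump diagnostics, kill the outstanding perf job, and finish cleanly. A stripped-down version of that pattern; the stub functions stand in for SPDK's `process_shm`/`nvmftestfini` helpers, and the backgrounded `sleep` stands in for the spdk_nvme_perf job:

```shell
#!/usr/bin/env bash
# Cleanup-on-exit idiom from target/shutdown.sh: whatever happens,
# dump shared-memory diagnostics, kill the perf process, tear down.
process_shm()  { echo "process_shm --id $1"; }   # stub for the SPDK helper
nvmftestfini() { echo "nvmftestfini"; }          # stub for the SPDK helper

sleep 30 &                    # stand-in for the backgrounded spdk_nvme_perf
perfpid=$!

cleanup() {
    process_shm "${NVMF_APP_SHM_ID:-0}"
    kill -9 "$perfpid" 2>/dev/null || true
    nvmftestfini
}
trap cleanup SIGINT SIGTERM EXIT

echo "test body runs here"    # on any exit path, the trap runs cleanup
```

Registering the same handler for EXIT as well as the signals is the key detail: an `rpc_cmd` failing under `set -e` still triggers the teardown, which is why the killed target below never leaves stale namespaces or iptables rules behind.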
00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 [2024-11-06 15:35:12.754487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.537 [2024-11-06 15:35:12.754531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e080 is same with the state(6) to be set 00:30:45.537 [2024-11-06 15:35:12.754564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e080 is same with the state(6) to be set 00:30:45.537 [2024-11-06 15:35:12.754576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e080 is same with the state(6) to be set 00:30:45.537 [2024-11-06 15:35:12.754585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e080 is same with the state(6) to be set 00:30:45.537 [2024-11-06 15:35:12.754594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e080 is same with the state(6) to be set 00:30:45.537 [2024-11-06 15:35:12.754603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e080 is same with the state(6) to be set 00:30:45.537 [2024-11-06 15:35:12.754611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e080 is same with the state(6) to be set 00:30:45.537 [2024-11-06 15:35:12.754619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e080 is same with the state(6) to 
be set 00:30:45.537 [2024-11-06 15:35:12.754628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e080 is same with the state(6) to be set 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O 
failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 [2024-11-06 15:35:12.756302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:45.537 NVMe io qpair process completion error 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed 
with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 Write completed with error (sct=0, sc=8) 00:30:45.537 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 [2024-11-06 15:35:12.757784] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.538 starting I/O failed: -6 00:30:45.538 starting I/O failed: -6 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 
00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 [2024-11-06 15:35:12.759676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 
00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with error (sct=0, sc=8) 00:30:45.538 starting I/O failed: -6 00:30:45.538 Write completed with 
error (sct=0, sc=8)
00:30:45.538 Write completed with error (sct=0, sc=8)
00:30:45.538 starting I/O failed: -6
00:30:45.538 [2024-11-06 15:35:12.762261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:45.539 [2024-11-06 15:35:12.772586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:45.539 NVMe io qpair process completion error
00:30:45.539 [2024-11-06 15:35:12.774139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:45.539 [2024-11-06 15:35:12.775978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:45.540 [2024-11-06 15:35:12.778454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.540 [2024-11-06 15:35:12.792592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:45.540 NVMe io qpair process completion error
00:30:45.541 [2024-11-06 15:35:12.794065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:45.541 [2024-11-06 15:35:12.795965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.541 [2024-11-06 15:35:12.798508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:45.542 [2024-11-06 15:35:12.813466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:45.542 NVMe io qpair process completion error
00:30:45.542 [2024-11-06 15:35:12.814973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:45.542 Write completed with error 
(sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 
00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.542 starting I/O failed: -6 00:30:45.542 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 [2024-11-06 15:35:12.816717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with 
error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 
starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 [2024-11-06 15:35:12.819299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 
00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: 
-6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O 
failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 [2024-11-06 15:35:12.832926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:45.543 NVMe io qpair process completion error 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 starting I/O failed: -6 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.543 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, 
sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 [2024-11-06 15:35:12.834471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with 
error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 
00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 [2024-11-06 15:35:12.836078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 
00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 
00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 [2024-11-06 15:35:12.838536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, 
sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.544 Write completed with error (sct=0, sc=8) 00:30:45.544 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error (sct=0, sc=8) 00:30:45.545 starting I/O failed: -6 00:30:45.545 Write completed with error 
(sct=0, sc=8) 00:30:45.545 starting I/O failed: -6
00:30:45.545 Write completed with error (sct=0, sc=8)
00:30:45.545 starting I/O failed: -6
00:30:45.545 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:30:45.545 [2024-11-06 15:35:12.848882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:45.545 NVMe io qpair process completion error
00:30:45.545 [... repeated write-completion errors elided ...]
00:30:45.545 [2024-11-06 15:35:12.850501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:45.545 [... repeated write-completion errors elided ...]
00:30:45.545 [2024-11-06 15:35:12.852064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.546 [... repeated write-completion errors elided ...]
00:30:45.546 [2024-11-06 15:35:12.854526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:45.546 [... repeated write-completion errors elided ...]
00:30:45.546 [2024-11-06 15:35:12.869889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:45.547 NVMe io qpair process completion error
00:30:45.547 [... repeated write-completion errors elided ...]
00:30:45.547 [2024-11-06 15:35:12.871584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:45.547 [... repeated write-completion errors elided ...]
00:30:45.547 [2024-11-06 15:35:12.873480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.547 [... repeated write-completion errors elided ...]
00:30:45.547 [2024-11-06 15:35:12.875783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:45.548 [... repeated write-completion errors elided ...]
00:30:45.548 [2024-11-06 15:35:12.889597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:45.548 NVMe io qpair process completion error
00:30:45.548 [... repeated write-completion errors elided ...]
00:30:45.548 [2024-11-06 15:35:12.891083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:45.548 [... repeated write-completion errors elided ...]
00:30:45.548 [2024-11-06 15:35:12.892990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:45.548 Write completed with error (sct=0, sc=8)
00:30:45.548 starting I/O failed: -6
00:30:45.549 [... repeated write-completion errors elided ...]
00:30:45.549 Write completed with
error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 
starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 [2024-11-06 15:35:12.895419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error 
(sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with 
error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 [2024-11-06 15:35:12.909225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:45.549 NVMe io qpair process completion error 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 starting I/O failed: -6 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.549 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O 
failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 [2024-11-06 15:35:12.910791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 
starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 [2024-11-06 15:35:12.912507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with 
error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 
starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 [2024-11-06 15:35:12.915027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on 
qpair id 2 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.550 Write completed with error (sct=0, sc=8) 00:30:45.550 starting I/O failed: -6 00:30:45.551 Write completed with error 
(sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with 
error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 starting I/O failed: -6 00:30:45.551 [2024-11-06 15:35:12.928979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:45.551 NVMe io qpair process completion error 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with 
error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with 
error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with 
error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 Write completed with error (sct=0, sc=8) 00:30:45.551 [2024-11-06 15:35:12.944801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:45.551 NVMe io qpair process completion error 00:30:45.551 Initializing NVMe Controllers 00:30:45.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:45.551 Controller IO queue size 128, less than required. 00:30:45.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:45.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:30:45.551 Controller IO queue size 128, less than required. 00:30:45.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:45.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:30:45.552 Controller IO queue size 128, less than required. 00:30:45.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:45.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:30:45.552 Controller IO queue size 128, less than required. 00:30:45.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:45.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:30:45.552 Controller IO queue size 128, less than required. 00:30:45.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:45.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:30:45.552 Controller IO queue size 128, less than required. 00:30:45.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:45.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:30:45.552 Controller IO queue size 128, less than required. 00:30:45.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:45.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:30:45.552 Controller IO queue size 128, less than required. 00:30:45.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:45.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:30:45.552 Controller IO queue size 128, less than required. 00:30:45.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:45.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:30:45.552 Controller IO queue size 128, less than required. 00:30:45.552 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:45.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:45.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:30:45.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:30:45.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:30:45.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:30:45.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:30:45.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:30:45.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:30:45.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:30:45.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:30:45.552 Initialization complete. Launching workers. 
00:30:45.552 ========================================================
00:30:45.552 Latency(us)
00:30:45.552 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:30:45.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1876.90      80.65   68207.72    1510.44  164370.58
00:30:45.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1843.44      79.21   69563.73    1861.98  160089.55
00:30:45.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1800.09      77.35   71408.31    1152.84  151306.05
00:30:45.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1866.11      80.18   69059.09    1821.02  143894.62
00:30:45.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1911.01      82.11   67588.98    1323.47  151600.12
00:30:45.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1876.46      80.63   68940.40    1212.03  190684.66
00:30:45.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1867.87      80.26   69450.13    1945.50  208869.90
00:30:45.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1881.52      80.85   69098.21    1140.00  203541.09
00:30:45.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1872.94      80.48   69579.90    1605.78  220092.23
00:30:45.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:    1852.47      79.60   69848.89    1340.20  235756.31
00:30:45.552 ========================================================
00:30:45.552 Total                                                  :   18648.81     801.32   69261.33    1140.00  235756.31
00:30:45.552
00:30:45.552 [2024-11-06 15:35:12.977513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e080 is same with the state(6) to be set
00:30:45.552 [2024-11-06 15:35:12.977576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ef80 is same with the state(6) to be set
00:30:45.552 [2024-11-06 15:35:12.977618] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:30:45.552 [2024-11-06 15:35:12.977660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:30:45.552 [2024-11-06 15:35:12.977700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000022400 is same with the state(6) to be set 00:30:45.552 [2024-11-06 15:35:12.977738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020d80 is same with the state(6) to be set 00:30:45.552 [2024-11-06 15:35:12.977777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000021500 is same with the state(6) to be set 00:30:45.552 [2024-11-06 15:35:12.977815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000021c80 is same with the state(6) to be set 00:30:45.552 [2024-11-06 15:35:12.977854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f700 is same with the state(6) to be set 00:30:45.552 [2024-11-06 15:35:12.977891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set 00:30:45.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:48.840 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3985226 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3985226 00:30:49.409 15:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3985226 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:49.409 15:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:49.409 rmmod nvme_tcp 00:30:49.409 rmmod nvme_fabrics 00:30:49.409 rmmod nvme_keyring 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3984890 ']' 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3984890 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3984890 ']' 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3984890 00:30:49.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3984890) - No such process 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3984890 is not 
found' 00:30:49.409 Process with pid 3984890 is not found 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:49.409 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:49.410 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.410 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.410 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.945 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:51.945 00:30:51.945 real 0m13.662s 00:30:51.945 user 0m39.227s 00:30:51.945 sys 0m4.803s 00:30:51.945 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:30:51.945 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:51.945 ************************************ 00:30:51.945 END TEST nvmf_shutdown_tc4 00:30:51.945 ************************************ 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:51.945 00:30:51.945 real 0m58.694s 00:30:51.945 user 2m48.985s 00:30:51.945 sys 0m14.744s 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:51.945 ************************************ 00:30:51.945 END TEST nvmf_shutdown 00:30:51.945 ************************************ 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:51.945 ************************************ 00:30:51.945 START TEST nvmf_nsid 00:30:51.945 ************************************ 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:51.945 * Looking for test storage... 
00:30:51.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.945 
15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:51.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.945 --rc genhtml_branch_coverage=1 00:30:51.945 --rc genhtml_function_coverage=1 00:30:51.945 --rc genhtml_legend=1 00:30:51.945 --rc geninfo_all_blocks=1 00:30:51.945 --rc 
geninfo_unexecuted_blocks=1 00:30:51.945 00:30:51.945 ' 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:51.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.945 --rc genhtml_branch_coverage=1 00:30:51.945 --rc genhtml_function_coverage=1 00:30:51.945 --rc genhtml_legend=1 00:30:51.945 --rc geninfo_all_blocks=1 00:30:51.945 --rc geninfo_unexecuted_blocks=1 00:30:51.945 00:30:51.945 ' 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:51.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.945 --rc genhtml_branch_coverage=1 00:30:51.945 --rc genhtml_function_coverage=1 00:30:51.945 --rc genhtml_legend=1 00:30:51.945 --rc geninfo_all_blocks=1 00:30:51.945 --rc geninfo_unexecuted_blocks=1 00:30:51.945 00:30:51.945 ' 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:51.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.945 --rc genhtml_branch_coverage=1 00:30:51.945 --rc genhtml_function_coverage=1 00:30:51.945 --rc genhtml_legend=1 00:30:51.945 --rc geninfo_all_blocks=1 00:30:51.945 --rc geninfo_unexecuted_blocks=1 00:30:51.945 00:30:51.945 ' 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:30:51.945 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.946 15:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:51.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:51.946 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:58.520 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:58.521 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:58.521 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:58.521 Found net devices under 0000:86:00.0: cvl_0_0 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:58.521 Found net devices under 0000:86:00.1: cvl_0_1 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:58.521 15:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:58.521 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:58.521 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:30:58.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:30:58.521 00:30:58.521 --- 10.0.0.2 ping statistics --- 00:30:58.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.521 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:58.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:58.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:30:58.521 00:30:58.521 --- 10.0.0.1 ping statistics --- 00:30:58.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.521 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:58.521 15:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3990105 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3990105 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3990105 ']' 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:58.521 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:58.521 [2024-11-06 15:35:25.308229] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:30:58.522 [2024-11-06 15:35:25.308325] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.522 [2024-11-06 15:35:25.440902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.522 [2024-11-06 15:35:25.553209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:58.522 [2024-11-06 15:35:25.553253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:58.522 [2024-11-06 15:35:25.553265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:58.522 [2024-11-06 15:35:25.553276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:58.522 [2024-11-06 15:35:25.553285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:58.522 [2024-11-06 15:35:25.554701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3990334 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.522 
15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=21e458ee-3e12-43c6-b68d-c0293a0bd3ea 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=f3a1e15e-1745-48d8-a429-49206efc6ce1 00:30:58.522 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:58.803 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=81cb17d2-bac2-4dd5-8566-0401cfc1aba5 00:30:58.803 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:58.803 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.803 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:58.803 null0 00:30:58.803 null1 00:30:58.803 null2 00:30:58.803 [2024-11-06 15:35:26.186143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.803 [2024-11-06 15:35:26.210380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.803 [2024-11-06 15:35:26.216040] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 
initialization... 00:30:58.803 [2024-11-06 15:35:26.216111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3990334 ] 00:30:58.803 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.803 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3990334 /var/tmp/tgt2.sock 00:30:58.803 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3990334 ']' 00:30:58.803 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:58.803 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:58.803 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:58.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:30:58.803 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:58.803 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:58.803 [2024-11-06 15:35:26.341350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.089 [2024-11-06 15:35:26.452489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.677 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:59.677 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:30:59.677 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:59.936 [2024-11-06 15:35:27.557299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.195 [2024-11-06 15:35:27.573473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:31:00.195 nvme0n1 nvme0n2 00:31:00.195 nvme1n1 00:31:00.195 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:31:00.195 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:31:00.195 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:31:01.132 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 21e458ee-3e12-43c6-b68d-c0293a0bd3ea 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:31:02.510 15:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=21e458ee3e1243c6b68dc0293a0bd3ea 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 21E458EE3E1243C6B68DC0293A0BD3EA 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 21E458EE3E1243C6B68DC0293A0BD3EA == \2\1\E\4\5\8\E\E\3\E\1\2\4\3\C\6\B\6\8\D\C\0\2\9\3\A\0\B\D\3\E\A ]] 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:31:02.510 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid f3a1e15e-1745-48d8-a429-49206efc6ce1 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:31:02.511 
15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f3a1e15e174548d8a42949206efc6ce1 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F3A1E15E174548D8A42949206EFC6CE1 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ F3A1E15E174548D8A42949206EFC6CE1 == \F\3\A\1\E\1\5\E\1\7\4\5\4\8\D\8\A\4\2\9\4\9\2\0\6\E\F\C\6\C\E\1 ]] 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 81cb17d2-bac2-4dd5-8566-0401cfc1aba5 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=81cb17d2bac24dd585660401cfc1aba5 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 81CB17D2BAC24DD585660401CFC1ABA5 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 81CB17D2BAC24DD585660401CFC1ABA5 == \8\1\C\B\1\7\D\2\B\A\C\2\4\D\D\5\8\5\6\6\0\4\0\1\C\F\C\1\A\B\A\5 ]] 00:31:02.511 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:31:02.768 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:31:02.768 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:31:02.768 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3990334 00:31:02.768 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3990334 ']' 00:31:02.768 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3990334 00:31:02.768 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:31:02.768 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:02.768 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3990334 00:31:02.768 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:02.768 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:02.768 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3990334' 00:31:02.768 killing process with pid 3990334 00:31:02.768 15:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3990334 00:31:02.769 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3990334 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:05.303 rmmod nvme_tcp 00:31:05.303 rmmod nvme_fabrics 00:31:05.303 rmmod nvme_keyring 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3990105 ']' 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3990105 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3990105 ']' 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3990105 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:05.303 15:35:32 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3990105 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3990105' 00:31:05.303 killing process with pid 3990105 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3990105 00:31:05.303 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3990105 00:31:06.240 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:06.240 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:06.240 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:06.240 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:31:06.240 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:31:06.240 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:31:06.240 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:06.240 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:06.240 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:06.240 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.240 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.240 15:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.776 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:08.776 00:31:08.776 real 0m16.778s 00:31:08.776 user 0m16.897s 00:31:08.776 sys 0m5.837s 00:31:08.776 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:08.776 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:31:08.776 ************************************ 00:31:08.776 END TEST nvmf_nsid 00:31:08.776 ************************************ 00:31:08.776 15:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:08.776 00:31:08.776 real 18m54.123s 00:31:08.776 user 49m50.731s 00:31:08.776 sys 4m12.396s 00:31:08.776 15:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:08.776 15:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:08.776 ************************************ 00:31:08.776 END TEST nvmf_target_extra 00:31:08.776 ************************************ 00:31:08.776 15:35:35 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:08.776 15:35:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:08.776 15:35:35 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:08.776 15:35:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.776 ************************************ 00:31:08.776 START TEST nvmf_host 00:31:08.776 ************************************ 00:31:08.776 15:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:08.776 * Looking for test storage... 
00:31:08.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:31:08.776 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:08.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.777 --rc genhtml_branch_coverage=1 00:31:08.777 --rc genhtml_function_coverage=1 00:31:08.777 --rc genhtml_legend=1 00:31:08.777 --rc geninfo_all_blocks=1 00:31:08.777 --rc geninfo_unexecuted_blocks=1 00:31:08.777 00:31:08.777 ' 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:08.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.777 --rc genhtml_branch_coverage=1 00:31:08.777 --rc genhtml_function_coverage=1 00:31:08.777 --rc genhtml_legend=1 00:31:08.777 --rc 
geninfo_all_blocks=1 00:31:08.777 --rc geninfo_unexecuted_blocks=1 00:31:08.777 00:31:08.777 ' 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:08.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.777 --rc genhtml_branch_coverage=1 00:31:08.777 --rc genhtml_function_coverage=1 00:31:08.777 --rc genhtml_legend=1 00:31:08.777 --rc geninfo_all_blocks=1 00:31:08.777 --rc geninfo_unexecuted_blocks=1 00:31:08.777 00:31:08.777 ' 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:08.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.777 --rc genhtml_branch_coverage=1 00:31:08.777 --rc genhtml_function_coverage=1 00:31:08.777 --rc genhtml_legend=1 00:31:08.777 --rc geninfo_all_blocks=1 00:31:08.777 --rc geninfo_unexecuted_blocks=1 00:31:08.777 00:31:08.777 ' 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:08.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.777 ************************************ 00:31:08.777 START TEST nvmf_multicontroller 00:31:08.777 ************************************ 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:08.777 * Looking for test storage... 
00:31:08.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.777 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:08.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.778 --rc genhtml_branch_coverage=1 00:31:08.778 --rc genhtml_function_coverage=1 
00:31:08.778 --rc genhtml_legend=1 00:31:08.778 --rc geninfo_all_blocks=1 00:31:08.778 --rc geninfo_unexecuted_blocks=1 00:31:08.778 00:31:08.778 ' 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:08.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.778 --rc genhtml_branch_coverage=1 00:31:08.778 --rc genhtml_function_coverage=1 00:31:08.778 --rc genhtml_legend=1 00:31:08.778 --rc geninfo_all_blocks=1 00:31:08.778 --rc geninfo_unexecuted_blocks=1 00:31:08.778 00:31:08.778 ' 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:08.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.778 --rc genhtml_branch_coverage=1 00:31:08.778 --rc genhtml_function_coverage=1 00:31:08.778 --rc genhtml_legend=1 00:31:08.778 --rc geninfo_all_blocks=1 00:31:08.778 --rc geninfo_unexecuted_blocks=1 00:31:08.778 00:31:08.778 ' 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:08.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.778 --rc genhtml_branch_coverage=1 00:31:08.778 --rc genhtml_function_coverage=1 00:31:08.778 --rc genhtml_legend=1 00:31:08.778 --rc geninfo_all_blocks=1 00:31:08.778 --rc geninfo_unexecuted_blocks=1 00:31:08.778 00:31:08.778 ' 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.778 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.037 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:09.037 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:09.037 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.037 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.037 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.037 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.037 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.037 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:31:09.037 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:31:09.037 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.038 15:35:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:09.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:31:09.038 15:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.608 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.608 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:31:15.608 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:15.608 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:15.609 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:15.609 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.609 15:35:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:15.609 Found net devices under 0000:86:00.0: cvl_0_0 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:15.609 Found net devices under 0000:86:00.1: cvl_0_1 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:15.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:31:15.609 00:31:15.609 --- 10.0.0.2 ping statistics --- 00:31:15.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.609 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:15.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:31:15.609 00:31:15.609 --- 10.0.0.1 ping statistics --- 00:31:15.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.609 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:15.609 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3995118 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3995118 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3995118 ']' 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:15.610 15:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.610 [2024-11-06 15:35:42.437287] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:15.610 [2024-11-06 15:35:42.437377] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.610 [2024-11-06 15:35:42.566368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:15.610 [2024-11-06 15:35:42.678744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.610 [2024-11-06 15:35:42.678791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:15.610 [2024-11-06 15:35:42.678803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.610 [2024-11-06 15:35:42.678813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.610 [2024-11-06 15:35:42.678822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.610 [2024-11-06 15:35:42.681286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.610 [2024-11-06 15:35:42.681353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.610 [2024-11-06 15:35:42.681375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.610 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:15.610 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:31:15.610 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:15.610 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:15.610 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.869 [2024-11-06 15:35:43.280928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.869 Malloc0 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.869 [2024-11-06 
15:35:43.397997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.869 [2024-11-06 15:35:43.405901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.869 Malloc1 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.869 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3995365 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3995365 /var/tmp/bdevperf.sock 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3995365 ']' 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:16.128 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:16.129 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:16.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:16.129 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:16.129 15:35:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.065 NVMe0n1 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.065 1 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:17.065 15:35:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.065 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.065 request: 00:31:17.065 { 00:31:17.065 "name": "NVMe0", 00:31:17.065 "trtype": "tcp", 00:31:17.065 "traddr": "10.0.0.2", 00:31:17.065 "adrfam": "ipv4", 00:31:17.065 "trsvcid": "4420", 00:31:17.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.065 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:31:17.065 "hostaddr": "10.0.0.1", 00:31:17.065 "prchk_reftag": false, 00:31:17.065 "prchk_guard": false, 00:31:17.065 "hdgst": false, 00:31:17.065 "ddgst": false, 00:31:17.065 "allow_unrecognized_csi": false, 00:31:17.065 "method": "bdev_nvme_attach_controller", 00:31:17.065 "req_id": 1 00:31:17.065 } 00:31:17.065 Got JSON-RPC error response 00:31:17.065 response: 00:31:17.065 { 00:31:17.065 "code": -114, 00:31:17.066 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:17.066 } 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:17.066 15:35:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.066 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.325 request: 00:31:17.325 { 00:31:17.325 "name": "NVMe0", 00:31:17.325 "trtype": "tcp", 00:31:17.325 "traddr": "10.0.0.2", 00:31:17.325 "adrfam": "ipv4", 00:31:17.325 "trsvcid": "4420", 00:31:17.325 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:17.325 "hostaddr": "10.0.0.1", 00:31:17.325 "prchk_reftag": false, 00:31:17.325 "prchk_guard": false, 00:31:17.325 "hdgst": false, 00:31:17.325 "ddgst": false, 00:31:17.325 "allow_unrecognized_csi": false, 00:31:17.325 "method": "bdev_nvme_attach_controller", 00:31:17.325 "req_id": 1 00:31:17.325 } 00:31:17.325 Got JSON-RPC error response 00:31:17.325 response: 00:31:17.325 { 00:31:17.325 "code": -114, 00:31:17.325 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:17.325 } 00:31:17.325 15:35:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.325 request: 00:31:17.325 { 00:31:17.325 "name": "NVMe0", 00:31:17.325 "trtype": "tcp", 00:31:17.325 "traddr": "10.0.0.2", 00:31:17.325 "adrfam": "ipv4", 00:31:17.325 "trsvcid": "4420", 00:31:17.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.325 "hostaddr": "10.0.0.1", 00:31:17.325 "prchk_reftag": false, 00:31:17.325 "prchk_guard": false, 00:31:17.325 "hdgst": false, 00:31:17.325 "ddgst": false, 00:31:17.325 "multipath": "disable", 00:31:17.325 "allow_unrecognized_csi": false, 00:31:17.325 "method": "bdev_nvme_attach_controller", 00:31:17.325 "req_id": 1 00:31:17.325 } 00:31:17.325 Got JSON-RPC error response 00:31:17.325 response: 00:31:17.325 { 00:31:17.325 "code": -114, 00:31:17.325 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:31:17.325 } 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.325 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.325 request: 00:31:17.325 { 00:31:17.325 "name": "NVMe0", 00:31:17.325 "trtype": "tcp", 00:31:17.325 "traddr": "10.0.0.2", 00:31:17.325 "adrfam": "ipv4", 00:31:17.325 "trsvcid": "4420", 00:31:17.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.325 "hostaddr": "10.0.0.1", 00:31:17.325 "prchk_reftag": false, 00:31:17.325 "prchk_guard": false, 00:31:17.325 "hdgst": false, 00:31:17.325 "ddgst": false, 00:31:17.325 "multipath": "failover", 00:31:17.325 "allow_unrecognized_csi": false, 00:31:17.326 "method": "bdev_nvme_attach_controller", 00:31:17.326 "req_id": 1 00:31:17.326 } 00:31:17.326 Got JSON-RPC error response 00:31:17.326 response: 00:31:17.326 { 00:31:17.326 "code": -114, 00:31:17.326 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:17.326 } 00:31:17.326 15:35:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:17.326 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:17.326 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:17.326 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:17.326 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:17.326 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:17.326 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.326 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.585 NVMe0n1 00:31:17.585 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.585 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:17.585 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.585 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.585 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.585 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:17.585 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.585 15:35:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.585 00:31:17.585 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.585 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:17.585 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:17.585 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.585 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:17.585 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.585 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:17.585 15:35:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:18.961 { 00:31:18.961 "results": [ 00:31:18.961 { 00:31:18.961 "job": "NVMe0n1", 00:31:18.961 "core_mask": "0x1", 00:31:18.961 "workload": "write", 00:31:18.961 "status": "finished", 00:31:18.961 "queue_depth": 128, 00:31:18.961 "io_size": 4096, 00:31:18.961 "runtime": 1.006339, 00:31:18.961 "iops": 21394.38101872232, 00:31:18.962 "mibps": 83.57180085438407, 00:31:18.962 "io_failed": 0, 00:31:18.962 "io_timeout": 0, 00:31:18.962 "avg_latency_us": 5968.2382691703715, 00:31:18.962 "min_latency_us": 3245.592380952381, 00:31:18.962 "max_latency_us": 13356.860952380952 00:31:18.962 } 00:31:18.962 ], 00:31:18.962 "core_count": 1 00:31:18.962 } 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3995365 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3995365 ']' 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3995365 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3995365 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3995365' 00:31:18.962 killing process with pid 3995365 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3995365 00:31:18.962 15:35:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3995365 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:31:19.899 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:19.899 [2024-11-06 15:35:43.592537] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:31:19.899 [2024-11-06 15:35:43.592645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995365 ] 00:31:19.899 [2024-11-06 15:35:43.718878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.899 [2024-11-06 15:35:43.833373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.899 [2024-11-06 15:35:45.101310] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name cf92ee6c-789c-4570-ab1d-2d2c07487a94 already exists 00:31:19.899 [2024-11-06 15:35:45.101349] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:cf92ee6c-789c-4570-ab1d-2d2c07487a94 alias for bdev NVMe1n1 00:31:19.899 [2024-11-06 15:35:45.101369] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:19.899 Running I/O for 1 seconds... 00:31:19.899 21339.00 IOPS, 83.36 MiB/s 00:31:19.899 Latency(us) 00:31:19.899 [2024-11-06T14:35:47.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.899 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:19.899 NVMe0n1 : 1.01 21394.38 83.57 0.00 0.00 5968.24 3245.59 13356.86 00:31:19.899 [2024-11-06T14:35:47.537Z] =================================================================================================================== 00:31:19.899 [2024-11-06T14:35:47.537Z] Total : 21394.38 83.57 0.00 0.00 5968.24 3245.59 13356.86 00:31:19.899 Received shutdown signal, test time was about 1.000000 seconds 00:31:19.899 00:31:19.899 Latency(us) 00:31:19.899 [2024-11-06T14:35:47.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.899 [2024-11-06T14:35:47.537Z] =================================================================================================================== 00:31:19.899 [2024-11-06T14:35:47.537Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:31:19.899 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:19.899 rmmod nvme_tcp 00:31:19.899 rmmod nvme_fabrics 00:31:19.899 rmmod nvme_keyring 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3995118 ']' 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3995118 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3995118 ']' 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3995118 
00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3995118 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:19.899 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3995118' 00:31:19.899 killing process with pid 3995118 00:31:19.900 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3995118 00:31:19.900 15:35:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3995118 00:31:21.277 15:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:21.277 15:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:21.277 15:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:21.277 15:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:31:21.277 15:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:31:21.277 15:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:21.277 15:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:31:21.277 15:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:21.277 15:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:31:21.277 15:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.277 15:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.277 15:35:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.813 15:35:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:23.813 00:31:23.813 real 0m14.694s 00:31:23.813 user 0m23.658s 00:31:23.813 sys 0m5.351s 00:31:23.813 15:35:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:23.813 15:35:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:23.813 ************************************ 00:31:23.813 END TEST nvmf_multicontroller 00:31:23.813 ************************************ 00:31:23.813 15:35:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:23.813 15:35:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:23.813 15:35:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:23.813 15:35:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.813 ************************************ 00:31:23.813 START TEST nvmf_aer 00:31:23.813 ************************************ 00:31:23.813 15:35:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:23.813 * Looking for test storage... 
00:31:23.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:23.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.813 --rc genhtml_branch_coverage=1 00:31:23.813 --rc genhtml_function_coverage=1 00:31:23.813 --rc genhtml_legend=1 00:31:23.813 --rc geninfo_all_blocks=1 00:31:23.813 --rc geninfo_unexecuted_blocks=1 00:31:23.813 00:31:23.813 ' 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:23.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.813 --rc 
genhtml_branch_coverage=1 00:31:23.813 --rc genhtml_function_coverage=1 00:31:23.813 --rc genhtml_legend=1 00:31:23.813 --rc geninfo_all_blocks=1 00:31:23.813 --rc geninfo_unexecuted_blocks=1 00:31:23.813 00:31:23.813 ' 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:23.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.813 --rc genhtml_branch_coverage=1 00:31:23.813 --rc genhtml_function_coverage=1 00:31:23.813 --rc genhtml_legend=1 00:31:23.813 --rc geninfo_all_blocks=1 00:31:23.813 --rc geninfo_unexecuted_blocks=1 00:31:23.813 00:31:23.813 ' 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:23.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.813 --rc genhtml_branch_coverage=1 00:31:23.813 --rc genhtml_function_coverage=1 00:31:23.813 --rc genhtml_legend=1 00:31:23.813 --rc geninfo_all_blocks=1 00:31:23.813 --rc geninfo_unexecuted_blocks=1 00:31:23.813 00:31:23.813 ' 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.813 15:35:51 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:23.813 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:23.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:31:23.814 15:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:30.382 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:30.382 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.382 15:35:56 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:30.382 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:30.383 Found net devices under 0000:86:00.0: cvl_0_0 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:30.383 Found net devices under 0000:86:00.1: cvl_0_1 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:30.383 15:35:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:30.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:30.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:31:30.383 00:31:30.383 --- 10.0.0.2 ping statistics --- 00:31:30.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.383 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:30.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:31:30.383 00:31:30.383 --- 10.0.0.1 ping statistics --- 00:31:30.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.383 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3999596 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3999596 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3999596 ']' 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:30.383 15:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:30.383 [2024-11-06 15:35:57.206405] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:30.383 [2024-11-06 15:35:57.206493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.383 [2024-11-06 15:35:57.338055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:30.383 [2024-11-06 15:35:57.449627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
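The interface plumbing traced above (nvmf/common.sh's nvmf_tcp_init) moves the target NIC into its own network namespace so initiator and target traffic cross a real link. A dry-run sketch of that sequence, assuming the cvl_0_0/cvl_0_1 names from this run; the run() wrapper is illustrative (it records and prints instead of executing) — drop it to run for real, which requires root and the actual NICs:

```shell
# Dry-run sketch of the nvmf_tcp_init steps traced above. run() is a
# hypothetical wrapper: it logs each command into $plan and echoes it
# rather than executing, so the sketch is safe to run anywhere.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

plan=
run() { plan+="$*"$'\n'; echo "+ $*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"          # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                            # root ns -> target ns
run ip netns exec "$NS" ping -c 1 10.0.0.1        # target ns -> root ns
```

The two pings at the end mirror the connectivity check in the log: the topology is only considered up once both directions answer.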
00:31:30.383 [2024-11-06 15:35:57.449668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.383 [2024-11-06 15:35:57.449679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.383 [2024-11-06 15:35:57.449690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.383 [2024-11-06 15:35:57.449698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.383 [2024-11-06 15:35:57.452279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.383 [2024-11-06 15:35:57.452403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:30.383 [2024-11-06 15:35:57.452469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.383 [2024-11-06 15:35:57.452491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:30.642 [2024-11-06 15:35:58.063497] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:30.642 Malloc0 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.642 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:30.643 [2024-11-06 15:35:58.198494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
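The rpc_cmd calls host/aer.sh has issued so far can be replayed as a short rpc.py sequence. A dry-run sketch with flags taken verbatim from the trace; the rpc() echo wrapper is illustrative so the sketch runs without a live nvmf_tgt (drop it and call SPDK's rpc.py directly against the target's socket to execute for real):

```shell
# Dry-run replay of the subsystem setup traced above. rpc() is a
# hypothetical stand-in that prints the rpc.py invocation it would make.
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192            # flags as traced
rpc bdev_malloc_create 64 512 --name Malloc0           # backing ramdisk bdev
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 2   # max 2 namespaces
rpc nvmf_subsystem_add_ns "$NQN" Malloc0               # becomes nsid 1
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_get_subsystems                                # dumps the JSON seen in the log
```

The -m 2 cap shows up as "max_namespaces": 2 in the nvmf_get_subsystems output, which is why the test can later add exactly one more namespace (Malloc1, nsid 2) to fire the AER.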
00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:30.643 [ 00:31:30.643 { 00:31:30.643 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:30.643 "subtype": "Discovery", 00:31:30.643 "listen_addresses": [], 00:31:30.643 "allow_any_host": true, 00:31:30.643 "hosts": [] 00:31:30.643 }, 00:31:30.643 { 00:31:30.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:30.643 "subtype": "NVMe", 00:31:30.643 "listen_addresses": [ 00:31:30.643 { 00:31:30.643 "trtype": "TCP", 00:31:30.643 "adrfam": "IPv4", 00:31:30.643 "traddr": "10.0.0.2", 00:31:30.643 "trsvcid": "4420" 00:31:30.643 } 00:31:30.643 ], 00:31:30.643 "allow_any_host": true, 00:31:30.643 "hosts": [], 00:31:30.643 "serial_number": "SPDK00000000000001", 00:31:30.643 "model_number": "SPDK bdev Controller", 00:31:30.643 "max_namespaces": 2, 00:31:30.643 "min_cntlid": 1, 00:31:30.643 "max_cntlid": 65519, 00:31:30.643 "namespaces": [ 00:31:30.643 { 00:31:30.643 "nsid": 1, 00:31:30.643 "bdev_name": "Malloc0", 00:31:30.643 "name": "Malloc0", 00:31:30.643 "nguid": "EB55F92D1D884B3589A5078C551B2C09", 00:31:30.643 "uuid": "eb55f92d-1d88-4b35-89a5-078c551b2c09" 00:31:30.643 } 00:31:30.643 ] 00:31:30.643 } 00:31:30.643 ] 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3999843 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:31:30.643 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:31:30.902 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:30.902 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:31:30.902 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:31:30.902 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:31:30.902 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:30.902 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:31:30.902 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:31:30.902 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:31:30.902 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:31:30.902 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 3 -lt 200 ']' 00:31:30.902 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=4 00:31:30.902 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:31:31.160 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:31.160 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:31.160 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:31:31.160 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:31:31.160 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.160 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.160 Malloc1 00:31:31.160 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.160 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:31:31.160 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.419 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.419 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.419 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:31:31.419 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.419 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.419 [ 00:31:31.419 { 00:31:31.419 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:31.419 "subtype": "Discovery", 00:31:31.419 
"listen_addresses": [], 00:31:31.419 "allow_any_host": true, 00:31:31.419 "hosts": [] 00:31:31.419 }, 00:31:31.419 { 00:31:31.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:31.419 "subtype": "NVMe", 00:31:31.419 "listen_addresses": [ 00:31:31.419 { 00:31:31.419 "trtype": "TCP", 00:31:31.419 "adrfam": "IPv4", 00:31:31.419 "traddr": "10.0.0.2", 00:31:31.419 "trsvcid": "4420" 00:31:31.419 } 00:31:31.419 ], 00:31:31.419 "allow_any_host": true, 00:31:31.419 "hosts": [], 00:31:31.419 "serial_number": "SPDK00000000000001", 00:31:31.419 "model_number": "SPDK bdev Controller", 00:31:31.419 "max_namespaces": 2, 00:31:31.419 "min_cntlid": 1, 00:31:31.419 "max_cntlid": 65519, 00:31:31.419 "namespaces": [ 00:31:31.419 { 00:31:31.419 "nsid": 1, 00:31:31.419 "bdev_name": "Malloc0", 00:31:31.419 "name": "Malloc0", 00:31:31.419 "nguid": "EB55F92D1D884B3589A5078C551B2C09", 00:31:31.419 "uuid": "eb55f92d-1d88-4b35-89a5-078c551b2c09" 00:31:31.419 }, 00:31:31.419 { 00:31:31.419 "nsid": 2, 00:31:31.419 "bdev_name": "Malloc1", 00:31:31.419 "name": "Malloc1", 00:31:31.419 "nguid": "87ECC45B0441458384F5F560DC50C572", 00:31:31.419 "uuid": "87ecc45b-0441-4583-84f5-f560dc50c572" 00:31:31.419 } 00:31:31.419 ] 00:31:31.419 } 00:31:31.419 ] 00:31:31.419 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.419 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3999843 00:31:31.419 Asynchronous Event Request test 00:31:31.419 Attaching to 10.0.0.2 00:31:31.419 Attached to 10.0.0.2 00:31:31.419 Registering asynchronous event callbacks... 00:31:31.419 Starting namespace attribute notice tests for all controllers... 00:31:31.419 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:31:31.419 aer_cb - Changed Namespace 00:31:31.419 Cleaning up... 
00:31:31.419 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:31.419 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.419 15:35:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:31.678 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:31.678 rmmod nvme_tcp 
00:31:31.678 rmmod nvme_fabrics 00:31:31.678 rmmod nvme_keyring 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3999596 ']' 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3999596 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3999596 ']' 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3999596 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3999596 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3999596' 00:31:31.937 killing process with pid 3999596 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 3999596 00:31:31.937 15:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3999596 00:31:33.313 15:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:33.313 15:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:33.313 15:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:33.313 15:36:00 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:31:33.313 15:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:31:33.313 15:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:33.313 15:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:31:33.313 15:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:33.313 15:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:33.313 15:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.313 15:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.313 15:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:35.218 00:31:35.218 real 0m11.624s 00:31:35.218 user 0m12.995s 00:31:35.218 sys 0m5.133s 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:35.218 ************************************ 00:31:35.218 END TEST nvmf_aer 00:31:35.218 ************************************ 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.218 ************************************ 00:31:35.218 START TEST nvmf_async_init 
00:31:35.218 ************************************ 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:35.218 * Looking for test storage... 00:31:35.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:31:35.218 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:35.477 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:31:35.477 --rc genhtml_branch_coverage=1 00:31:35.477 --rc genhtml_function_coverage=1 00:31:35.477 --rc genhtml_legend=1 00:31:35.477 --rc geninfo_all_blocks=1 00:31:35.477 --rc geninfo_unexecuted_blocks=1 00:31:35.477 00:31:35.477 ' 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:35.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.477 --rc genhtml_branch_coverage=1 00:31:35.477 --rc genhtml_function_coverage=1 00:31:35.477 --rc genhtml_legend=1 00:31:35.477 --rc geninfo_all_blocks=1 00:31:35.477 --rc geninfo_unexecuted_blocks=1 00:31:35.477 00:31:35.477 ' 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:35.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.477 --rc genhtml_branch_coverage=1 00:31:35.477 --rc genhtml_function_coverage=1 00:31:35.477 --rc genhtml_legend=1 00:31:35.477 --rc geninfo_all_blocks=1 00:31:35.477 --rc geninfo_unexecuted_blocks=1 00:31:35.477 00:31:35.477 ' 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:35.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.477 --rc genhtml_branch_coverage=1 00:31:35.477 --rc genhtml_function_coverage=1 00:31:35.477 --rc genhtml_legend=1 00:31:35.477 --rc geninfo_all_blocks=1 00:31:35.477 --rc geninfo_unexecuted_blocks=1 00:31:35.477 00:31:35.477 ' 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.477 15:36:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.477 
15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:35.477 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:35.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0c1ffaae6ccb46f59cfe97b5770d39fe 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:31:35.478 15:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:31:42.041 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:42.042 15:36:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:42.042 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:42.042 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:42.042 Found net devices under 0000:86:00.0: cvl_0_0 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:42.042 Found net devices under 0000:86:00.1: cvl_0_1 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:42.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:42.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:31:42.042 00:31:42.042 --- 10.0.0.2 ping statistics --- 00:31:42.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.042 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:42.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:31:42.042 00:31:42.042 --- 10.0.0.1 ping statistics --- 00:31:42.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.042 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:42.042 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=4003903 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 4003903 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 4003903 ']' 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:42.043 15:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.043 [2024-11-06 15:36:08.902558] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:31:42.043 [2024-11-06 15:36:08.902653] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.043 [2024-11-06 15:36:09.035576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.043 [2024-11-06 15:36:09.143795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:42.043 [2024-11-06 15:36:09.143840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:42.043 [2024-11-06 15:36:09.143852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:42.043 [2024-11-06 15:36:09.143863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:42.043 [2024-11-06 15:36:09.143872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:42.043 [2024-11-06 15:36:09.145377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.302 [2024-11-06 15:36:09.744471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.302 null0 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0c1ffaae6ccb46f59cfe97b5770d39fe 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.302 [2024-11-06 15:36:09.796714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.302 15:36:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.560 nvme0n1 00:31:42.560 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.560 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:42.560 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.560 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.560 [ 00:31:42.560 { 00:31:42.560 "name": "nvme0n1", 00:31:42.560 "aliases": [ 00:31:42.560 "0c1ffaae-6ccb-46f5-9cfe-97b5770d39fe" 00:31:42.560 ], 00:31:42.560 "product_name": "NVMe disk", 00:31:42.560 "block_size": 512, 00:31:42.560 "num_blocks": 2097152, 00:31:42.560 "uuid": "0c1ffaae-6ccb-46f5-9cfe-97b5770d39fe", 00:31:42.560 "numa_id": 1, 00:31:42.560 "assigned_rate_limits": { 00:31:42.560 "rw_ios_per_sec": 0, 00:31:42.560 "rw_mbytes_per_sec": 0, 00:31:42.561 "r_mbytes_per_sec": 0, 00:31:42.561 "w_mbytes_per_sec": 0 00:31:42.561 }, 00:31:42.561 "claimed": false, 00:31:42.561 "zoned": false, 00:31:42.561 "supported_io_types": { 00:31:42.561 "read": true, 00:31:42.561 "write": true, 00:31:42.561 "unmap": false, 00:31:42.561 "flush": true, 00:31:42.561 "reset": true, 00:31:42.561 "nvme_admin": true, 00:31:42.561 "nvme_io": true, 00:31:42.561 "nvme_io_md": false, 00:31:42.561 "write_zeroes": true, 00:31:42.561 "zcopy": false, 00:31:42.561 "get_zone_info": false, 00:31:42.561 "zone_management": false, 00:31:42.561 "zone_append": false, 00:31:42.561 "compare": true, 00:31:42.561 "compare_and_write": true, 00:31:42.561 "abort": true, 00:31:42.561 "seek_hole": false, 00:31:42.561 "seek_data": false, 00:31:42.561 "copy": true, 00:31:42.561 
"nvme_iov_md": false 00:31:42.561 }, 00:31:42.561 "memory_domains": [ 00:31:42.561 { 00:31:42.561 "dma_device_id": "system", 00:31:42.561 "dma_device_type": 1 00:31:42.561 } 00:31:42.561 ], 00:31:42.561 "driver_specific": { 00:31:42.561 "nvme": [ 00:31:42.561 { 00:31:42.561 "trid": { 00:31:42.561 "trtype": "TCP", 00:31:42.561 "adrfam": "IPv4", 00:31:42.561 "traddr": "10.0.0.2", 00:31:42.561 "trsvcid": "4420", 00:31:42.561 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:42.561 }, 00:31:42.561 "ctrlr_data": { 00:31:42.561 "cntlid": 1, 00:31:42.561 "vendor_id": "0x8086", 00:31:42.561 "model_number": "SPDK bdev Controller", 00:31:42.561 "serial_number": "00000000000000000000", 00:31:42.561 "firmware_revision": "25.01", 00:31:42.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.561 "oacs": { 00:31:42.561 "security": 0, 00:31:42.561 "format": 0, 00:31:42.561 "firmware": 0, 00:31:42.561 "ns_manage": 0 00:31:42.561 }, 00:31:42.561 "multi_ctrlr": true, 00:31:42.561 "ana_reporting": false 00:31:42.561 }, 00:31:42.561 "vs": { 00:31:42.561 "nvme_version": "1.3" 00:31:42.561 }, 00:31:42.561 "ns_data": { 00:31:42.561 "id": 1, 00:31:42.561 "can_share": true 00:31:42.561 } 00:31:42.561 } 00:31:42.561 ], 00:31:42.561 "mp_policy": "active_passive" 00:31:42.561 } 00:31:42.561 } 00:31:42.561 ] 00:31:42.561 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.561 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:42.561 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.561 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.561 [2024-11-06 15:36:10.066850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:42.561 [2024-11-06 15:36:10.066937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:31:42.820 [2024-11-06 15:36:10.209339] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.820 [ 00:31:42.820 { 00:31:42.820 "name": "nvme0n1", 00:31:42.820 "aliases": [ 00:31:42.820 "0c1ffaae-6ccb-46f5-9cfe-97b5770d39fe" 00:31:42.820 ], 00:31:42.820 "product_name": "NVMe disk", 00:31:42.820 "block_size": 512, 00:31:42.820 "num_blocks": 2097152, 00:31:42.820 "uuid": "0c1ffaae-6ccb-46f5-9cfe-97b5770d39fe", 00:31:42.820 "numa_id": 1, 00:31:42.820 "assigned_rate_limits": { 00:31:42.820 "rw_ios_per_sec": 0, 00:31:42.820 "rw_mbytes_per_sec": 0, 00:31:42.820 "r_mbytes_per_sec": 0, 00:31:42.820 "w_mbytes_per_sec": 0 00:31:42.820 }, 00:31:42.820 "claimed": false, 00:31:42.820 "zoned": false, 00:31:42.820 "supported_io_types": { 00:31:42.820 "read": true, 00:31:42.820 "write": true, 00:31:42.820 "unmap": false, 00:31:42.820 "flush": true, 00:31:42.820 "reset": true, 00:31:42.820 "nvme_admin": true, 00:31:42.820 "nvme_io": true, 00:31:42.820 "nvme_io_md": false, 00:31:42.820 "write_zeroes": true, 00:31:42.820 "zcopy": false, 00:31:42.820 "get_zone_info": false, 00:31:42.820 "zone_management": false, 00:31:42.820 "zone_append": false, 00:31:42.820 "compare": true, 00:31:42.820 "compare_and_write": true, 00:31:42.820 "abort": true, 00:31:42.820 "seek_hole": false, 00:31:42.820 "seek_data": false, 00:31:42.820 "copy": true, 00:31:42.820 "nvme_iov_md": false 00:31:42.820 }, 00:31:42.820 "memory_domains": [ 
00:31:42.820 { 00:31:42.820 "dma_device_id": "system", 00:31:42.820 "dma_device_type": 1 00:31:42.820 } 00:31:42.820 ], 00:31:42.820 "driver_specific": { 00:31:42.820 "nvme": [ 00:31:42.820 { 00:31:42.820 "trid": { 00:31:42.820 "trtype": "TCP", 00:31:42.820 "adrfam": "IPv4", 00:31:42.820 "traddr": "10.0.0.2", 00:31:42.820 "trsvcid": "4420", 00:31:42.820 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:42.820 }, 00:31:42.820 "ctrlr_data": { 00:31:42.820 "cntlid": 2, 00:31:42.820 "vendor_id": "0x8086", 00:31:42.820 "model_number": "SPDK bdev Controller", 00:31:42.820 "serial_number": "00000000000000000000", 00:31:42.820 "firmware_revision": "25.01", 00:31:42.820 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.820 "oacs": { 00:31:42.820 "security": 0, 00:31:42.820 "format": 0, 00:31:42.820 "firmware": 0, 00:31:42.820 "ns_manage": 0 00:31:42.820 }, 00:31:42.820 "multi_ctrlr": true, 00:31:42.820 "ana_reporting": false 00:31:42.820 }, 00:31:42.820 "vs": { 00:31:42.820 "nvme_version": "1.3" 00:31:42.820 }, 00:31:42.820 "ns_data": { 00:31:42.820 "id": 1, 00:31:42.820 "can_share": true 00:31:42.820 } 00:31:42.820 } 00:31:42.820 ], 00:31:42.820 "mp_policy": "active_passive" 00:31:42.820 } 00:31:42.820 } 00:31:42.820 ] 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4MV0yo0S9v 
00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4MV0yo0S9v 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.4MV0yo0S9v 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.820 [2024-11-06 15:36:10.283539] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:42.820 [2024-11-06 15:36:10.283695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.820 [2024-11-06 15:36:10.303590] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:42.820 nvme0n1 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.820 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.820 [ 00:31:42.820 { 00:31:42.820 "name": "nvme0n1", 00:31:42.820 "aliases": [ 00:31:42.820 "0c1ffaae-6ccb-46f5-9cfe-97b5770d39fe" 00:31:42.820 ], 00:31:42.820 "product_name": "NVMe disk", 00:31:42.820 "block_size": 512, 00:31:42.820 "num_blocks": 2097152, 00:31:42.820 "uuid": "0c1ffaae-6ccb-46f5-9cfe-97b5770d39fe", 00:31:42.820 "numa_id": 1, 00:31:42.820 "assigned_rate_limits": { 00:31:42.820 "rw_ios_per_sec": 0, 00:31:42.820 
"rw_mbytes_per_sec": 0, 00:31:42.820 "r_mbytes_per_sec": 0, 00:31:42.820 "w_mbytes_per_sec": 0 00:31:42.820 }, 00:31:42.820 "claimed": false, 00:31:42.820 "zoned": false, 00:31:42.820 "supported_io_types": { 00:31:42.821 "read": true, 00:31:42.821 "write": true, 00:31:42.821 "unmap": false, 00:31:42.821 "flush": true, 00:31:42.821 "reset": true, 00:31:42.821 "nvme_admin": true, 00:31:42.821 "nvme_io": true, 00:31:42.821 "nvme_io_md": false, 00:31:42.821 "write_zeroes": true, 00:31:42.821 "zcopy": false, 00:31:42.821 "get_zone_info": false, 00:31:42.821 "zone_management": false, 00:31:42.821 "zone_append": false, 00:31:42.821 "compare": true, 00:31:42.821 "compare_and_write": true, 00:31:42.821 "abort": true, 00:31:42.821 "seek_hole": false, 00:31:42.821 "seek_data": false, 00:31:42.821 "copy": true, 00:31:42.821 "nvme_iov_md": false 00:31:42.821 }, 00:31:42.821 "memory_domains": [ 00:31:42.821 { 00:31:42.821 "dma_device_id": "system", 00:31:42.821 "dma_device_type": 1 00:31:42.821 } 00:31:42.821 ], 00:31:42.821 "driver_specific": { 00:31:42.821 "nvme": [ 00:31:42.821 { 00:31:42.821 "trid": { 00:31:42.821 "trtype": "TCP", 00:31:42.821 "adrfam": "IPv4", 00:31:42.821 "traddr": "10.0.0.2", 00:31:42.821 "trsvcid": "4421", 00:31:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:42.821 }, 00:31:42.821 "ctrlr_data": { 00:31:42.821 "cntlid": 3, 00:31:42.821 "vendor_id": "0x8086", 00:31:42.821 "model_number": "SPDK bdev Controller", 00:31:42.821 "serial_number": "00000000000000000000", 00:31:42.821 "firmware_revision": "25.01", 00:31:42.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.821 "oacs": { 00:31:42.821 "security": 0, 00:31:42.821 "format": 0, 00:31:42.821 "firmware": 0, 00:31:42.821 "ns_manage": 0 00:31:42.821 }, 00:31:42.821 "multi_ctrlr": true, 00:31:42.821 "ana_reporting": false 00:31:42.821 }, 00:31:42.821 "vs": { 00:31:42.821 "nvme_version": "1.3" 00:31:42.821 }, 00:31:42.821 "ns_data": { 00:31:42.821 "id": 1, 00:31:42.821 "can_share": true 00:31:42.821 } 
00:31:42.821 } 00:31:42.821 ], 00:31:42.821 "mp_policy": "active_passive" 00:31:42.821 } 00:31:42.821 } 00:31:42.821 ] 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.4MV0yo0S9v 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:42.821 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:42.821 rmmod nvme_tcp 00:31:42.821 rmmod nvme_fabrics 00:31:43.080 rmmod nvme_keyring 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:43.080 15:36:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 4003903 ']' 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 4003903 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 4003903 ']' 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 4003903 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4003903 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4003903' 00:31:43.080 killing process with pid 4003903 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 4003903 00:31:43.080 15:36:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 4003903 00:31:44.016 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:44.016 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:44.016 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:44.016 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:31:44.016 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:31:44.016 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:44.016 
15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:31:44.016 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:44.016 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:44.016 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.017 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.017 15:36:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:46.548 00:31:46.548 real 0m10.984s 00:31:46.548 user 0m4.623s 00:31:46.548 sys 0m4.949s 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:46.548 ************************************ 00:31:46.548 END TEST nvmf_async_init 00:31:46.548 ************************************ 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.548 ************************************ 00:31:46.548 START TEST dma 00:31:46.548 ************************************ 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:31:46.548 * Looking for test storage... 00:31:46.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:46.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.548 --rc genhtml_branch_coverage=1 00:31:46.548 --rc genhtml_function_coverage=1 00:31:46.548 --rc genhtml_legend=1 00:31:46.548 --rc geninfo_all_blocks=1 00:31:46.548 --rc geninfo_unexecuted_blocks=1 00:31:46.548 00:31:46.548 ' 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:46.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.548 --rc genhtml_branch_coverage=1 00:31:46.548 --rc genhtml_function_coverage=1 
00:31:46.548 --rc genhtml_legend=1 00:31:46.548 --rc geninfo_all_blocks=1 00:31:46.548 --rc geninfo_unexecuted_blocks=1 00:31:46.548 00:31:46.548 ' 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:46.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.548 --rc genhtml_branch_coverage=1 00:31:46.548 --rc genhtml_function_coverage=1 00:31:46.548 --rc genhtml_legend=1 00:31:46.548 --rc geninfo_all_blocks=1 00:31:46.548 --rc geninfo_unexecuted_blocks=1 00:31:46.548 00:31:46.548 ' 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:46.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.548 --rc genhtml_branch_coverage=1 00:31:46.548 --rc genhtml_function_coverage=1 00:31:46.548 --rc genhtml_legend=1 00:31:46.548 --rc geninfo_all_blocks=1 00:31:46.548 --rc geninfo_unexecuted_blocks=1 00:31:46.548 00:31:46.548 ' 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:46.548 
15:36:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:46.548 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:46.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:46.549 00:31:46.549 real 0m0.205s 00:31:46.549 user 0m0.124s 00:31:46.549 sys 0m0.095s 00:31:46.549 15:36:13 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:46.549 ************************************ 00:31:46.549 END TEST dma 00:31:46.549 ************************************ 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:46.549 15:36:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.549 ************************************ 00:31:46.549 START TEST nvmf_identify 00:31:46.549 ************************************ 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:46.549 * Looking for test storage... 
00:31:46.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:46.549 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:46.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.808 --rc genhtml_branch_coverage=1 00:31:46.808 --rc genhtml_function_coverage=1 00:31:46.808 --rc genhtml_legend=1 00:31:46.808 --rc geninfo_all_blocks=1 00:31:46.808 --rc geninfo_unexecuted_blocks=1 00:31:46.808 00:31:46.808 ' 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:31:46.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.808 --rc genhtml_branch_coverage=1 00:31:46.808 --rc genhtml_function_coverage=1 00:31:46.808 --rc genhtml_legend=1 00:31:46.808 --rc geninfo_all_blocks=1 00:31:46.808 --rc geninfo_unexecuted_blocks=1 00:31:46.808 00:31:46.808 ' 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:46.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.808 --rc genhtml_branch_coverage=1 00:31:46.808 --rc genhtml_function_coverage=1 00:31:46.808 --rc genhtml_legend=1 00:31:46.808 --rc geninfo_all_blocks=1 00:31:46.808 --rc geninfo_unexecuted_blocks=1 00:31:46.808 00:31:46.808 ' 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:46.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.808 --rc genhtml_branch_coverage=1 00:31:46.808 --rc genhtml_function_coverage=1 00:31:46.808 --rc genhtml_legend=1 00:31:46.808 --rc geninfo_all_blocks=1 00:31:46.808 --rc geninfo_unexecuted_blocks=1 00:31:46.808 00:31:46.808 ' 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.808 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:46.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:46.809 15:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:53.374 15:36:19 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:53.374 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.374 
15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:53.374 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:53.374 Found net devices under 0000:86:00.0: cvl_0_0 00:31:53.374 15:36:19 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:53.374 Found net devices under 0000:86:00.1: cvl_0_1 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:53.374 15:36:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:53.374 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:53.374 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:53.374 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:53.374 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:53.374 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:31:53.374 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:53.374 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:53.374 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:53.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:53.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:31:53.374 00:31:53.374 --- 10.0.0.2 ping statistics --- 00:31:53.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.374 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:31:53.374 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:53.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:53.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:31:53.375 00:31:53.375 --- 10.0.0.1 ping statistics --- 00:31:53.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.375 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4008167 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4008167 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 4008167 ']' 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:53.375 15:36:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:53.375 [2024-11-06 15:36:20.275664] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:53.375 [2024-11-06 15:36:20.275756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.375 [2024-11-06 15:36:20.406152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:53.375 [2024-11-06 15:36:20.514926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:53.375 [2024-11-06 15:36:20.514970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:53.375 [2024-11-06 15:36:20.514980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:53.375 [2024-11-06 15:36:20.514990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:53.375 [2024-11-06 15:36:20.514998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:53.375 [2024-11-06 15:36:20.517397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.375 [2024-11-06 15:36:20.517485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.375 [2024-11-06 15:36:20.517552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.375 [2024-11-06 15:36:20.517575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:53.633 [2024-11-06 15:36:21.109820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:53.633 Malloc0 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.633 15:36:21 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.633 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:53.894 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.894 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:53.894 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.894 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:53.894 [2024-11-06 15:36:21.274818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.894 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.894 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:53.894 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.894 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:53.894 15:36:21 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.895 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:53.895 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.895 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:53.895 [ 00:31:53.895 { 00:31:53.895 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:53.895 "subtype": "Discovery", 00:31:53.895 "listen_addresses": [ 00:31:53.895 { 00:31:53.895 "trtype": "TCP", 00:31:53.895 "adrfam": "IPv4", 00:31:53.895 "traddr": "10.0.0.2", 00:31:53.895 "trsvcid": "4420" 00:31:53.895 } 00:31:53.895 ], 00:31:53.895 "allow_any_host": true, 00:31:53.895 "hosts": [] 00:31:53.895 }, 00:31:53.895 { 00:31:53.895 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:53.895 "subtype": "NVMe", 00:31:53.895 "listen_addresses": [ 00:31:53.895 { 00:31:53.895 "trtype": "TCP", 00:31:53.895 "adrfam": "IPv4", 00:31:53.895 "traddr": "10.0.0.2", 00:31:53.895 "trsvcid": "4420" 00:31:53.895 } 00:31:53.895 ], 00:31:53.895 "allow_any_host": true, 00:31:53.895 "hosts": [], 00:31:53.895 "serial_number": "SPDK00000000000001", 00:31:53.895 "model_number": "SPDK bdev Controller", 00:31:53.895 "max_namespaces": 32, 00:31:53.895 "min_cntlid": 1, 00:31:53.895 "max_cntlid": 65519, 00:31:53.895 "namespaces": [ 00:31:53.895 { 00:31:53.895 "nsid": 1, 00:31:53.895 "bdev_name": "Malloc0", 00:31:53.895 "name": "Malloc0", 00:31:53.895 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:53.895 "eui64": "ABCDEF0123456789", 00:31:53.895 "uuid": "6ec477c4-0cbd-45a7-b1f2-5fefe635eb52" 00:31:53.895 } 00:31:53.895 ] 00:31:53.895 } 00:31:53.895 ] 00:31:53.895 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.895 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:53.895 [2024-11-06 15:36:21.347032] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:53.895 [2024-11-06 15:36:21.347097] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008417 ] 00:31:53.895 [2024-11-06 15:36:21.405817] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:31:53.895 [2024-11-06 15:36:21.405917] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:53.895 [2024-11-06 15:36:21.405927] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:53.895 [2024-11-06 15:36:21.405949] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:53.895 [2024-11-06 15:36:21.405963] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:53.895 [2024-11-06 15:36:21.409581] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:31:53.895 [2024-11-06 15:36:21.409631] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:31:53.895 [2024-11-06 15:36:21.417239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:53.895 [2024-11-06 15:36:21.417267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:53.895 [2024-11-06 15:36:21.417276] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:53.895 [2024-11-06 15:36:21.417283] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:53.895 [2024-11-06 15:36:21.417336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.417346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.417354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:53.895 [2024-11-06 15:36:21.417378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:53.895 [2024-11-06 15:36:21.417403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:53.895 [2024-11-06 15:36:21.424223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.895 [2024-11-06 15:36:21.424247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.895 [2024-11-06 15:36:21.424254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:53.895 [2024-11-06 15:36:21.424282] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:53.895 [2024-11-06 15:36:21.424295] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:31:53.895 [2024-11-06 15:36:21.424310] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:31:53.895 [2024-11-06 15:36:21.424329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x61500001db80) 00:31:53.895 [2024-11-06 15:36:21.424359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.895 [2024-11-06 15:36:21.424381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:53.895 [2024-11-06 15:36:21.424491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.895 [2024-11-06 15:36:21.424501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.895 [2024-11-06 15:36:21.424508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:53.895 [2024-11-06 15:36:21.424529] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:31:53.895 [2024-11-06 15:36:21.424541] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:31:53.895 [2024-11-06 15:36:21.424552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:53.895 [2024-11-06 15:36:21.424577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.895 [2024-11-06 15:36:21.424596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:53.895 [2024-11-06 15:36:21.424665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.895 [2024-11-06 15:36:21.424674] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.895 [2024-11-06 15:36:21.424679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:53.895 [2024-11-06 15:36:21.424695] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:31:53.895 [2024-11-06 15:36:21.424707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:53.895 [2024-11-06 15:36:21.424720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:53.895 [2024-11-06 15:36:21.424743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.895 [2024-11-06 15:36:21.424759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:53.895 [2024-11-06 15:36:21.424823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.895 [2024-11-06 15:36:21.424832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.895 [2024-11-06 15:36:21.424838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:53.895 [2024-11-06 15:36:21.424851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for 
CSTS.RDY = 0 (timeout 15000 ms) 00:31:53.895 [2024-11-06 15:36:21.424865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:53.895 [2024-11-06 15:36:21.424890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.895 [2024-11-06 15:36:21.424907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:53.895 [2024-11-06 15:36:21.424978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.895 [2024-11-06 15:36:21.424987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.895 [2024-11-06 15:36:21.424992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.424997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:53.895 [2024-11-06 15:36:21.425005] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:53.895 [2024-11-06 15:36:21.425015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:53.895 [2024-11-06 15:36:21.425028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:53.895 [2024-11-06 15:36:21.425137] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:31:53.895 [2024-11-06 15:36:21.425144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:53.895 [2024-11-06 15:36:21.425162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.425169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.895 [2024-11-06 15:36:21.425175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:53.895 [2024-11-06 15:36:21.425185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.896 [2024-11-06 15:36:21.425210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:53.896 [2024-11-06 15:36:21.425279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.896 [2024-11-06 15:36:21.425288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.896 [2024-11-06 15:36:21.425293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.425300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:53.896 [2024-11-06 15:36:21.425308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:53.896 [2024-11-06 15:36:21.425322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.425329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.425335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:53.896 [2024-11-06 15:36:21.425347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.896 [2024-11-06 
15:36:21.425362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:53.896 [2024-11-06 15:36:21.425443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.896 [2024-11-06 15:36:21.425455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.896 [2024-11-06 15:36:21.425460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.425466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:53.896 [2024-11-06 15:36:21.425473] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:53.896 [2024-11-06 15:36:21.425483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:53.896 [2024-11-06 15:36:21.425494] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:31:53.896 [2024-11-06 15:36:21.425513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:53.896 [2024-11-06 15:36:21.425530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.425537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:53.896 [2024-11-06 15:36:21.425549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.896 [2024-11-06 15:36:21.425564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:53.896 [2024-11-06 15:36:21.425674] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:53.896 [2024-11-06 15:36:21.425683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:53.896 [2024-11-06 15:36:21.425691] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.425699] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:31:53.896 [2024-11-06 15:36:21.425706] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:53.896 [2024-11-06 15:36:21.425714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.425731] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.425743] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.896 [2024-11-06 15:36:21.466301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.896 [2024-11-06 15:36:21.466307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:53.896 [2024-11-06 15:36:21.466334] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:31:53.896 [2024-11-06 15:36:21.466343] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:31:53.896 [2024-11-06 15:36:21.466350] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:31:53.896 [2024-11-06 15:36:21.466361] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:31:53.896 [2024-11-06 15:36:21.466368] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:31:53.896 [2024-11-06 15:36:21.466376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:31:53.896 [2024-11-06 15:36:21.466392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:53.896 [2024-11-06 15:36:21.466403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466410] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:53.896 [2024-11-06 15:36:21.466437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:53.896 [2024-11-06 15:36:21.466455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:53.896 [2024-11-06 15:36:21.466540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.896 [2024-11-06 15:36:21.466549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.896 [2024-11-06 15:36:21.466553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:53.896 [2024-11-06 15:36:21.466570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.896 [2024-11-06 
15:36:21.466582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:53.896 [2024-11-06 15:36:21.466592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.896 [2024-11-06 15:36:21.466601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:31:53.896 [2024-11-06 15:36:21.466621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.896 [2024-11-06 15:36:21.466628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:31:53.896 [2024-11-06 15:36:21.466647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.896 [2024-11-06 15:36:21.466654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:53.896 [2024-11-06 15:36:21.466672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.896 [2024-11-06 15:36:21.466679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:53.896 [2024-11-06 15:36:21.466693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:53.896 [2024-11-06 15:36:21.466702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:53.896 [2024-11-06 15:36:21.466720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.896 [2024-11-06 15:36:21.466737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:53.896 [2024-11-06 15:36:21.466744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:53.896 [2024-11-06 15:36:21.466751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:53.896 [2024-11-06 15:36:21.466757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:53.896 [2024-11-06 15:36:21.466763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:53.896 [2024-11-06 15:36:21.466876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.896 [2024-11-06 15:36:21.466885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.896 [2024-11-06 15:36:21.466890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:53.896 [2024-11-06 15:36:21.466906] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:31:53.896 [2024-11-06 15:36:21.466914] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:31:53.896 [2024-11-06 15:36:21.466932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.466939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:53.896 [2024-11-06 15:36:21.466950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.896 [2024-11-06 15:36:21.466964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:53.896 [2024-11-06 15:36:21.467055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:53.896 [2024-11-06 15:36:21.467064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:53.896 [2024-11-06 15:36:21.467073] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.467079] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:53.896 [2024-11-06 15:36:21.467087] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:53.896 [2024-11-06 15:36:21.467093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.467111] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.467117] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:53.896 [2024-11-06 15:36:21.467128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:31:53.897 [2024-11-06 15:36:21.467135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.897 [2024-11-06 15:36:21.467140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.467146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:53.897 [2024-11-06 15:36:21.467167] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:31:53.897 [2024-11-06 15:36:21.467224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.467232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:53.897 [2024-11-06 15:36:21.467243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.897 [2024-11-06 15:36:21.467252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.467258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.467263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:53.897 [2024-11-06 15:36:21.467273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.897 [2024-11-06 15:36:21.467290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:53.897 [2024-11-06 15:36:21.467298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:53.897 [2024-11-06 15:36:21.467457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:53.897 [2024-11-06 15:36:21.467466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=7 00:31:53.897 [2024-11-06 15:36:21.467472] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.467481] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=1024, cccid=4 00:31:53.897 [2024-11-06 15:36:21.467488] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=1024 00:31:53.897 [2024-11-06 15:36:21.467497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.467508] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.467515] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.467523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.897 [2024-11-06 15:36:21.467530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.897 [2024-11-06 15:36:21.467535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.467541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:53.897 [2024-11-06 15:36:21.508279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.897 [2024-11-06 15:36:21.508298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.897 [2024-11-06 15:36:21.508303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.508319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:53.897 [2024-11-06 15:36:21.508346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.508354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:53.897 [2024-11-06 
15:36:21.508366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.897 [2024-11-06 15:36:21.508392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:53.897 [2024-11-06 15:36:21.508500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:53.897 [2024-11-06 15:36:21.508509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:53.897 [2024-11-06 15:36:21.508514] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.508519] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=3072, cccid=4 00:31:53.897 [2024-11-06 15:36:21.508525] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=3072 00:31:53.897 [2024-11-06 15:36:21.508531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.508540] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.508546] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.508556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:53.897 [2024-11-06 15:36:21.508564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:53.897 [2024-11-06 15:36:21.508568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.508574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:53.897 [2024-11-06 15:36:21.508588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.508595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x61500001db80) 00:31:53.897 [2024-11-06 15:36:21.508605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.897 [2024-11-06 15:36:21.508625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:53.897 [2024-11-06 15:36:21.508721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:53.897 [2024-11-06 15:36:21.508729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:53.897 [2024-11-06 15:36:21.508733] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.508739] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8, cccid=4 00:31:53.897 [2024-11-06 15:36:21.508745] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=8 00:31:53.897 [2024-11-06 15:36:21.508753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.508764] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:53.897 [2024-11-06 15:36:21.508770] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:54.160 [2024-11-06 15:36:21.549305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.160 [2024-11-06 15:36:21.549325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.160 [2024-11-06 15:36:21.549331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.160 [2024-11-06 15:36:21.549337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:54.160 ===================================================== 00:31:54.160 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:54.160 
=====================================================
00:31:54.160 Controller Capabilities/Features
00:31:54.160 ================================
00:31:54.160 Vendor ID: 0000
00:31:54.160 Subsystem Vendor ID: 0000
00:31:54.160 Serial Number: ....................
00:31:54.160 Model Number: ........................................
00:31:54.160 Firmware Version: 25.01
00:31:54.160 Recommended Arb Burst: 0
00:31:54.160 IEEE OUI Identifier: 00 00 00
00:31:54.160 Multi-path I/O
00:31:54.160 May have multiple subsystem ports: No
00:31:54.160 May have multiple controllers: No
00:31:54.160 Associated with SR-IOV VF: No
00:31:54.160 Max Data Transfer Size: 131072
00:31:54.160 Max Number of Namespaces: 0
00:31:54.160 Max Number of I/O Queues: 1024
00:31:54.160 NVMe Specification Version (VS): 1.3
00:31:54.160 NVMe Specification Version (Identify): 1.3
00:31:54.160 Maximum Queue Entries: 128
00:31:54.160 Contiguous Queues Required: Yes
00:31:54.160 Arbitration Mechanisms Supported
00:31:54.160 Weighted Round Robin: Not Supported
00:31:54.160 Vendor Specific: Not Supported
00:31:54.160 Reset Timeout: 15000 ms
00:31:54.160 Doorbell Stride: 4 bytes
00:31:54.160 NVM Subsystem Reset: Not Supported
00:31:54.160 Command Sets Supported
00:31:54.160 NVM Command Set: Supported
00:31:54.160 Boot Partition: Not Supported
00:31:54.160 Memory Page Size Minimum: 4096 bytes
00:31:54.160 Memory Page Size Maximum: 4096 bytes
00:31:54.160 Persistent Memory Region: Not Supported
00:31:54.160 Optional Asynchronous Events Supported
00:31:54.160 Namespace Attribute Notices: Not Supported
00:31:54.160 Firmware Activation Notices: Not Supported
00:31:54.160 ANA Change Notices: Not Supported
00:31:54.160 PLE Aggregate Log Change Notices: Not Supported
00:31:54.160 LBA Status Info Alert Notices: Not Supported
00:31:54.160 EGE Aggregate Log Change Notices: Not Supported
00:31:54.160 Normal NVM Subsystem Shutdown event: Not Supported
00:31:54.160 Zone Descriptor Change Notices: Not Supported
00:31:54.160 Discovery Log Change Notices: Supported
00:31:54.160 Controller Attributes
00:31:54.160 128-bit Host Identifier: Not Supported
00:31:54.160 Non-Operational Permissive Mode: Not Supported
00:31:54.160 NVM Sets: Not Supported
00:31:54.160 Read Recovery Levels: Not Supported
00:31:54.160 Endurance Groups: Not Supported
00:31:54.160 Predictable Latency Mode: Not Supported
00:31:54.160 Traffic Based Keep ALive: Not Supported
00:31:54.160 Namespace Granularity: Not Supported
00:31:54.160 SQ Associations: Not Supported
00:31:54.160 UUID List: Not Supported
00:31:54.160 Multi-Domain Subsystem: Not Supported
00:31:54.160 Fixed Capacity Management: Not Supported
00:31:54.160 Variable Capacity Management: Not Supported
00:31:54.160 Delete Endurance Group: Not Supported
00:31:54.160 Delete NVM Set: Not Supported
00:31:54.160 Extended LBA Formats Supported: Not Supported
00:31:54.160 Flexible Data Placement Supported: Not Supported
00:31:54.160
00:31:54.160 Controller Memory Buffer Support
00:31:54.160 ================================
00:31:54.160 Supported: No
00:31:54.160
00:31:54.160 Persistent Memory Region Support
00:31:54.160 ================================
00:31:54.160 Supported: No
00:31:54.160
00:31:54.160 Admin Command Set Attributes
00:31:54.160 ============================
00:31:54.160 Security Send/Receive: Not Supported
00:31:54.160 Format NVM: Not Supported
00:31:54.160 Firmware Activate/Download: Not Supported
00:31:54.160 Namespace Management: Not Supported
00:31:54.160 Device Self-Test: Not Supported
00:31:54.160 Directives: Not Supported
00:31:54.160 NVMe-MI: Not Supported
00:31:54.160 Virtualization Management: Not Supported
00:31:54.160 Doorbell Buffer Config: Not Supported
00:31:54.160 Get LBA Status Capability: Not Supported
00:31:54.160 Command & Feature Lockdown Capability: Not Supported
00:31:54.160 Abort Command Limit: 1
00:31:54.160 Async Event Request Limit: 4
00:31:54.160 Number of Firmware Slots: N/A
00:31:54.160 Firmware Slot 1 Read-Only: N/A
00:31:54.160 Firmware Activation Without Reset: N/A
00:31:54.160 Multiple Update Detection Support: N/A
00:31:54.160 Firmware Update Granularity: No Information Provided
00:31:54.160 Per-Namespace SMART Log: No
00:31:54.160 Asymmetric Namespace Access Log Page: Not Supported
00:31:54.160 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:31:54.160 Command Effects Log Page: Not Supported
00:31:54.160 Get Log Page Extended Data: Supported
00:31:54.160 Telemetry Log Pages: Not Supported
00:31:54.160 Persistent Event Log Pages: Not Supported
00:31:54.160 Supported Log Pages Log Page: May Support
00:31:54.160 Commands Supported & Effects Log Page: Not Supported
00:31:54.160 Feature Identifiers & Effects Log Page:May Support
00:31:54.160 NVMe-MI Commands & Effects Log Page: May Support
00:31:54.160 Data Area 4 for Telemetry Log: Not Supported
00:31:54.160 Error Log Page Entries Supported: 128
00:31:54.160 Keep Alive: Not Supported
00:31:54.160
00:31:54.160 NVM Command Set Attributes
00:31:54.160 ==========================
00:31:54.160 Submission Queue Entry Size
00:31:54.160 Max: 1
00:31:54.160 Min: 1
00:31:54.160 Completion Queue Entry Size
00:31:54.160 Max: 1
00:31:54.160 Min: 1
00:31:54.160 Number of Namespaces: 0
00:31:54.160 Compare Command: Not Supported
00:31:54.160 Write Uncorrectable Command: Not Supported
00:31:54.160 Dataset Management Command: Not Supported
00:31:54.160 Write Zeroes Command: Not Supported
00:31:54.160 Set Features Save Field: Not Supported
00:31:54.160 Reservations: Not Supported
00:31:54.160 Timestamp: Not Supported
00:31:54.160 Copy: Not Supported
00:31:54.160 Volatile Write Cache: Not Present
00:31:54.160 Atomic Write Unit (Normal): 1
00:31:54.160 Atomic Write Unit (PFail): 1
00:31:54.160 Atomic Compare & Write Unit: 1
00:31:54.160 Fused Compare & Write: Supported
00:31:54.160 Scatter-Gather List
00:31:54.160 SGL Command Set: Supported
00:31:54.160 SGL Keyed: Supported
00:31:54.160 SGL Bit Bucket Descriptor: Not Supported
00:31:54.160 SGL Metadata Pointer: Not Supported
00:31:54.160 Oversized SGL: Not Supported
00:31:54.160 SGL Metadata Address: Not Supported
00:31:54.160 SGL Offset: Supported
00:31:54.160 Transport SGL Data Block: Not Supported
00:31:54.160 Replay Protected Memory Block: Not Supported
00:31:54.160
00:31:54.160 Firmware Slot Information
00:31:54.160 =========================
00:31:54.160 Active slot: 0
00:31:54.160
00:31:54.160
00:31:54.160 Error Log
00:31:54.160 =========
00:31:54.160
00:31:54.160 Active Namespaces
00:31:54.160 =================
00:31:54.160 Discovery Log Page
00:31:54.160 ==================
00:31:54.160 Generation Counter: 2
00:31:54.160 Number of Records: 2
00:31:54.160 Record Format: 0
00:31:54.160
00:31:54.160 Discovery Log Entry 0
00:31:54.160 ----------------------
00:31:54.160 Transport Type: 3 (TCP)
00:31:54.160 Address Family: 1 (IPv4)
00:31:54.160 Subsystem Type: 3 (Current Discovery Subsystem)
00:31:54.160 Entry Flags:
00:31:54.160 Duplicate Returned Information: 1
00:31:54.160 Explicit Persistent Connection Support for Discovery: 1
00:31:54.160 Transport Requirements:
00:31:54.160 Secure Channel: Not Required
00:31:54.160 Port ID: 0 (0x0000)
00:31:54.160 Controller ID: 65535 (0xffff)
00:31:54.160 Admin Max SQ Size: 128
00:31:54.160 Transport Service Identifier: 4420
00:31:54.160 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:31:54.160 Transport Address: 10.0.0.2
00:31:54.160 Discovery Log Entry 1
00:31:54.160 ----------------------
00:31:54.160 Transport Type: 3 (TCP)
00:31:54.160 Address Family: 1 (IPv4)
00:31:54.161 Subsystem Type: 2 (NVM Subsystem)
00:31:54.161 Entry Flags:
00:31:54.161 Duplicate Returned Information: 0
00:31:54.161 Explicit Persistent Connection Support for Discovery: 0
00:31:54.161 Transport Requirements:
00:31:54.161 Secure Channel: Not Required
00:31:54.161 Port ID: 0 (0x0000)
00:31:54.161 Controller ID: 65535 (0xffff)
00:31:54.161 Admin Max SQ Size: 128
00:31:54.161 Transport Service Identifier: 4420
00:31:54.161 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:31:54.161 Transport Address: 10.0.0.2 [2024-11-06 15:36:21.549461] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:31:54.161 [2024-11-06 15:36:21.549476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80
00:31:54.161 [2024-11-06 15:36:21.549487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.161 [2024-11-06 15:36:21.549495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80
00:31:54.161 [2024-11-06 15:36:21.549502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.161 [2024-11-06 15:36:21.549509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80
00:31:54.161 [2024-11-06 15:36:21.549516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.161 [2024-11-06 15:36:21.549523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80
00:31:54.161 [2024-11-06 15:36:21.549530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:54.161 [2024-11-06 15:36:21.549542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:54.161 [2024-11-06 15:36:21.549548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:54.161 [2024-11-06 15:36:21.549555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80)
00:31:54.161 [2024-11-06 15:36:21.549571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-11-06 15:36:21.549592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.161 [2024-11-06 15:36:21.549681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.161 [2024-11-06 15:36:21.549690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.161 [2024-11-06 15:36:21.549696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.549702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.161 [2024-11-06 15:36:21.549713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.549719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.549725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.161 [2024-11-06 15:36:21.549739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-11-06 15:36:21.549759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.161 [2024-11-06 15:36:21.549850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.161 [2024-11-06 15:36:21.549859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.161 [2024-11-06 15:36:21.549864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.549870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.161 [2024-11-06 15:36:21.549884] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:31:54.161 [2024-11-06 
15:36:21.549891] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:31:54.161 [2024-11-06 15:36:21.549905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.549912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.549918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.161 [2024-11-06 15:36:21.549928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-11-06 15:36:21.549944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.161 [2024-11-06 15:36:21.550014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.161 [2024-11-06 15:36:21.550022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.161 [2024-11-06 15:36:21.550027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.550033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.161 [2024-11-06 15:36:21.550046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.550052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.550057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.161 [2024-11-06 15:36:21.550066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-11-06 15:36:21.550080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.161 [2024-11-06 15:36:21.550152] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.161 [2024-11-06 15:36:21.550161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.161 [2024-11-06 15:36:21.550166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.550171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.161 [2024-11-06 15:36:21.550183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.550189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.550194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.161 [2024-11-06 15:36:21.554212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.161 [2024-11-06 15:36:21.554240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.161 [2024-11-06 15:36:21.554414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.161 [2024-11-06 15:36:21.554423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.161 [2024-11-06 15:36:21.554428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.554434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.161 [2024-11-06 15:36:21.554446] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:31:54.161 00:31:54.161 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:54.161 [2024-11-06 15:36:21.651522] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:54.161 [2024-11-06 15:36:21.651591] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008424 ] 00:31:54.161 [2024-11-06 15:36:21.715698] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:31:54.161 [2024-11-06 15:36:21.715806] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:54.161 [2024-11-06 15:36:21.715816] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:54.161 [2024-11-06 15:36:21.715837] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:54.161 [2024-11-06 15:36:21.715850] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:54.161 [2024-11-06 15:36:21.716489] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:31:54.161 [2024-11-06 15:36:21.716530] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:31:54.161 [2024-11-06 15:36:21.730221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:54.161 [2024-11-06 15:36:21.730248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:54.161 [2024-11-06 15:36:21.730257] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:54.161 [2024-11-06 15:36:21.730265] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:54.161 [2024-11-06 15:36:21.730314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:31:54.161 [2024-11-06 15:36:21.730323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.730332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:54.161 [2024-11-06 15:36:21.730353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:54.161 [2024-11-06 15:36:21.730376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:54.161 [2024-11-06 15:36:21.738219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.161 [2024-11-06 15:36:21.738239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.161 [2024-11-06 15:36:21.738245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.738256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:54.161 [2024-11-06 15:36:21.738275] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:54.161 [2024-11-06 15:36:21.738288] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:31:54.161 [2024-11-06 15:36:21.738297] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:31:54.161 [2024-11-06 15:36:21.738312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.738319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.161 [2024-11-06 15:36:21.738328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:54.161 [2024-11-06 15:36:21.738341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:54.161 [2024-11-06 15:36:21.738362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:54.161 [2024-11-06 15:36:21.738547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.161 [2024-11-06 15:36:21.738556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.162 [2024-11-06 15:36:21.738562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.738568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:54.162 [2024-11-06 15:36:21.738582] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:31:54.162 [2024-11-06 15:36:21.738595] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:31:54.162 [2024-11-06 15:36:21.738608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.738615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.738621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:54.162 [2024-11-06 15:36:21.738634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.162 [2024-11-06 15:36:21.738650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:54.162 [2024-11-06 15:36:21.738735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.162 [2024-11-06 15:36:21.738743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.162 [2024-11-06 15:36:21.738748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.738753] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:54.162 [2024-11-06 15:36:21.738761] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:31:54.162 [2024-11-06 15:36:21.738774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:54.162 [2024-11-06 15:36:21.738783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.738789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.738795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:54.162 [2024-11-06 15:36:21.738807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.162 [2024-11-06 15:36:21.738826] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:54.162 [2024-11-06 15:36:21.738896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.162 [2024-11-06 15:36:21.738905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.162 [2024-11-06 15:36:21.738910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.738915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:54.162 [2024-11-06 15:36:21.738922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:54.162 [2024-11-06 15:36:21.738937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.738944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:31:54.162 [2024-11-06 15:36:21.738950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:54.162 [2024-11-06 15:36:21.738960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.162 [2024-11-06 15:36:21.738973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:54.162 [2024-11-06 15:36:21.739050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.162 [2024-11-06 15:36:21.739058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.162 [2024-11-06 15:36:21.739063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:54.162 [2024-11-06 15:36:21.739075] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:54.162 [2024-11-06 15:36:21.739083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:54.162 [2024-11-06 15:36:21.739096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:54.162 [2024-11-06 15:36:21.739209] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:31:54.162 [2024-11-06 15:36:21.739216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:54.162 [2024-11-06 15:36:21.739233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739241] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:54.162 [2024-11-06 15:36:21.739257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.162 [2024-11-06 15:36:21.739273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:54.162 [2024-11-06 15:36:21.739355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.162 [2024-11-06 15:36:21.739364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.162 [2024-11-06 15:36:21.739369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:54.162 [2024-11-06 15:36:21.739381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:54.162 [2024-11-06 15:36:21.739394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:54.162 [2024-11-06 15:36:21.739418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.162 [2024-11-06 15:36:21.739432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:54.162 [2024-11-06 15:36:21.739511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.162 [2024-11-06 15:36:21.739518] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.162 [2024-11-06 15:36:21.739523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:54.162 [2024-11-06 15:36:21.739535] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:54.162 [2024-11-06 15:36:21.739543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:54.162 [2024-11-06 15:36:21.739557] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:31:54.162 [2024-11-06 15:36:21.739567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:54.162 [2024-11-06 15:36:21.739583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:54.162 [2024-11-06 15:36:21.739600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.162 [2024-11-06 15:36:21.739615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:54.162 [2024-11-06 15:36:21.739742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:54.162 [2024-11-06 15:36:21.739756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:54.162 [2024-11-06 15:36:21.739761] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:54.162 [2024-11-06 
15:36:21.739769] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:31:54.162 [2024-11-06 15:36:21.739776] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:54.162 [2024-11-06 15:36:21.739785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739796] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739803] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.162 [2024-11-06 15:36:21.739823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.162 [2024-11-06 15:36:21.739828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:54.162 [2024-11-06 15:36:21.739847] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:31:54.162 [2024-11-06 15:36:21.739855] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:31:54.162 [2024-11-06 15:36:21.739862] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:31:54.162 [2024-11-06 15:36:21.739869] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:31:54.162 [2024-11-06 15:36:21.739876] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:31:54.162 [2024-11-06 15:36:21.739889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to configure AER (timeout 30000 ms) 00:31:54.162 [2024-11-06 15:36:21.739908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:54.162 [2024-11-06 15:36:21.739918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.739930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:54.162 [2024-11-06 15:36:21.739940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:54.162 [2024-11-06 15:36:21.739956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:54.162 [2024-11-06 15:36:21.740033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.162 [2024-11-06 15:36:21.740041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.162 [2024-11-06 15:36:21.740046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.740051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:54.162 [2024-11-06 15:36:21.740060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.740067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.162 [2024-11-06 15:36:21.740073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:54.163 [2024-11-06 15:36:21.740086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.163 [2024-11-06 15:36:21.740097] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:31:54.163 [2024-11-06 15:36:21.740117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.163 [2024-11-06 15:36:21.740125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:31:54.163 [2024-11-06 15:36:21.740143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.163 [2024-11-06 15:36:21.740150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.163 [2024-11-06 15:36:21.740168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.163 [2024-11-06 15:36:21.740176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:54.163 [2024-11-06 15:36:21.740190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:54.163 [2024-11-06 15:36:21.740199] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:54.163 [2024-11-06 15:36:21.740221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.163 [2024-11-06 15:36:21.740239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:54.163 [2024-11-06 15:36:21.740246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:54.163 [2024-11-06 15:36:21.740252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:54.163 [2024-11-06 15:36:21.740258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.163 [2024-11-06 15:36:21.740264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:54.163 [2024-11-06 15:36:21.740380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.163 [2024-11-06 15:36:21.740389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.163 [2024-11-06 15:36:21.740394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:54.163 [2024-11-06 15:36:21.740409] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:31:54.163 [2024-11-06 15:36:21.740417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:54.163 [2024-11-06 15:36:21.740429] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:31:54.163 [2024-11-06 15:36:21.740437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:54.163 [2024-11-06 15:36:21.740448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:54.163 [2024-11-06 15:36:21.740469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:54.163 [2024-11-06 15:36:21.740485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:54.163 [2024-11-06 15:36:21.740559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.163 [2024-11-06 15:36:21.740568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.163 [2024-11-06 15:36:21.740572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:54.163 [2024-11-06 15:36:21.740652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:31:54.163 [2024-11-06 15:36:21.740671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:54.163 [2024-11-06 15:36:21.740686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740692] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:54.163 [2024-11-06 15:36:21.740703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.163 [2024-11-06 15:36:21.740719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:54.163 [2024-11-06 15:36:21.740824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:54.163 [2024-11-06 15:36:21.740832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:54.163 [2024-11-06 15:36:21.740837] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740842] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:54.163 [2024-11-06 15:36:21.740848] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:54.163 [2024-11-06 15:36:21.740854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740870] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.740877] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.781371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.163 [2024-11-06 15:36:21.781391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.163 [2024-11-06 15:36:21.781396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.781403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:54.163 [2024-11-06 15:36:21.781435] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:31:54.163 [2024-11-06 15:36:21.781452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:31:54.163 [2024-11-06 15:36:21.781467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:31:54.163 [2024-11-06 15:36:21.781484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.781490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:54.163 [2024-11-06 15:36:21.781503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.163 [2024-11-06 15:36:21.781521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:54.163 [2024-11-06 15:36:21.781657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:54.163 [2024-11-06 15:36:21.781665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:54.163 [2024-11-06 15:36:21.781669] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.781675] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:54.163 [2024-11-06 15:36:21.781684] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:54.163 [2024-11-06 15:36:21.781696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.781710] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:54.163 [2024-11-06 15:36:21.781715] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:54.492 
[2024-11-06 15:36:21.826215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.492 [2024-11-06 15:36:21.826235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.492 [2024-11-06 15:36:21.826241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.492 [2024-11-06 15:36:21.826247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:54.492 [2024-11-06 15:36:21.826274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:54.492 [2024-11-06 15:36:21.826292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:54.492 [2024-11-06 15:36:21.826306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.492 [2024-11-06 15:36:21.826316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:54.492 [2024-11-06 15:36:21.826328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.492 [2024-11-06 15:36:21.826347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:54.492 [2024-11-06 15:36:21.826537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:54.492 [2024-11-06 15:36:21.826546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:54.492 [2024-11-06 15:36:21.826551] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:54.492 [2024-11-06 15:36:21.826557] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:54.492 [2024-11-06 15:36:21.826563] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:54.492 [2024-11-06 15:36:21.826569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.492 [2024-11-06 15:36:21.826578] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:54.492 [2024-11-06 15:36:21.826583] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:54.492 [2024-11-06 15:36:21.867348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.492 [2024-11-06 15:36:21.867367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.492 [2024-11-06 15:36:21.867373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.492 [2024-11-06 15:36:21.867379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:54.492 [2024-11-06 15:36:21.867396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:54.492 [2024-11-06 15:36:21.867409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:31:54.492 [2024-11-06 15:36:21.867421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:31:54.492 [2024-11-06 15:36:21.867430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:54.492 [2024-11-06 15:36:21.867438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:54.492 [2024-11-06 15:36:21.867445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
set host ID (timeout 30000 ms) 00:31:54.492 [2024-11-06 15:36:21.867454] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:31:54.492 [2024-11-06 15:36:21.867461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:31:54.492 [2024-11-06 15:36:21.867469] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:31:54.492 [2024-11-06 15:36:21.867499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.492 [2024-11-06 15:36:21.867508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:54.492 [2024-11-06 15:36:21.867521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.493 [2024-11-06 15:36:21.867530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.867536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.867542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:54.493 [2024-11-06 15:36:21.867551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.493 [2024-11-06 15:36:21.867569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:54.493 [2024-11-06 15:36:21.867577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:54.493 [2024-11-06 15:36:21.867677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.493 [2024-11-06 15:36:21.867687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:31:54.493 [2024-11-06 15:36:21.867692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.867698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:54.493 [2024-11-06 15:36:21.867710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.493 [2024-11-06 15:36:21.867717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.493 [2024-11-06 15:36:21.867721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.867726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:54.493 [2024-11-06 15:36:21.867739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.867745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:54.493 [2024-11-06 15:36:21.867756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.493 [2024-11-06 15:36:21.867770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:54.493 [2024-11-06 15:36:21.867847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.493 [2024-11-06 15:36:21.867855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.493 [2024-11-06 15:36:21.867859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.867865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:54.493 [2024-11-06 15:36:21.867876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.867881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 
on tqpair(0x61500001db80) 00:31:54.493 [2024-11-06 15:36:21.867890] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.493 [2024-11-06 15:36:21.867902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:54.493 [2024-11-06 15:36:21.867987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.493 [2024-11-06 15:36:21.867995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.493 [2024-11-06 15:36:21.868001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.868007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:54.493 [2024-11-06 15:36:21.868018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.868024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:54.493 [2024-11-06 15:36:21.868033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.493 [2024-11-06 15:36:21.868045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:54.493 [2024-11-06 15:36:21.868116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.493 [2024-11-06 15:36:21.868124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.493 [2024-11-06 15:36:21.868128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.868133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:54.493 [2024-11-06 15:36:21.868158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.493 
[2024-11-06 15:36:21.868165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:54.493 [2024-11-06 15:36:21.868175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.493 [2024-11-06 15:36:21.868186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.868191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:54.493 [2024-11-06 15:36:21.868208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.493 [2024-11-06 15:36:21.868218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.868224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500001db80) 00:31:54.493 [2024-11-06 15:36:21.868234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.493 [2024-11-06 15:36:21.868249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.868256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500001db80) 00:31:54.493 [2024-11-06 15:36:21.868267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.493 [2024-11-06 15:36:21.868283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:54.493 [2024-11-06 15:36:21.868291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x62600001b700, cid 4, qid 0 00:31:54.493 [2024-11-06 15:36:21.868297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:31:54.493 [2024-11-06 15:36:21.868303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:54.493 [2024-11-06 15:36:21.868482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:54.493 [2024-11-06 15:36:21.868492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:54.493 [2024-11-06 15:36:21.868497] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:54.493 [2024-11-06 15:36:21.868503] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8192, cccid=5 00:31:54.493 [2024-11-06 15:36:21.868510] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500001db80): expected_datao=0, payload_size=8192 00:31:54.493 [2024-11-06 15:36:21.868516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868529] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868535] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:54.494 [2024-11-06 15:36:21.868549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:54.494 [2024-11-06 15:36:21.868554] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868559] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=4 00:31:54.494 [2024-11-06 15:36:21.868565] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:31:54.494 [2024-11-06 15:36:21.868570] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868594] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868600] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:54.494 [2024-11-06 15:36:21.868613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:54.494 [2024-11-06 15:36:21.868618] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868624] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=6 00:31:54.494 [2024-11-06 15:36:21.868629] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:31:54.494 [2024-11-06 15:36:21.868635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868642] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868648] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:54.494 [2024-11-06 15:36:21.868661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:54.494 [2024-11-06 15:36:21.868666] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868671] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=7 00:31:54.494 [2024-11-06 15:36:21.868677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:54.494 [2024-11-06 15:36:21.868682] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868690] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868695] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.494 [2024-11-06 15:36:21.868712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.494 [2024-11-06 15:36:21.868716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:54.494 [2024-11-06 15:36:21.868747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.494 [2024-11-06 15:36:21.868762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.494 [2024-11-06 15:36:21.868767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:54.494 [2024-11-06 15:36:21.868786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.494 [2024-11-06 15:36:21.868793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.494 [2024-11-06 15:36:21.868798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.494 [2024-11-06 15:36:21.868803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500001db80 00:31:54.494 [2024-11-06 15:36:21.868814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.494 [2024-11-06 15:36:21.868821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.494 [2024-11-06 15:36:21.868826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:31:54.494 [2024-11-06 15:36:21.868831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:31:54.494 ===================================================== 00:31:54.494 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:54.494 ===================================================== 00:31:54.494 Controller Capabilities/Features 00:31:54.494 ================================ 00:31:54.494 Vendor ID: 8086 00:31:54.494 Subsystem Vendor ID: 8086 00:31:54.494 Serial Number: SPDK00000000000001 00:31:54.494 Model Number: SPDK bdev Controller 00:31:54.494 Firmware Version: 25.01 00:31:54.494 Recommended Arb Burst: 6 00:31:54.494 IEEE OUI Identifier: e4 d2 5c 00:31:54.494 Multi-path I/O 00:31:54.494 May have multiple subsystem ports: Yes 00:31:54.494 May have multiple controllers: Yes 00:31:54.494 Associated with SR-IOV VF: No 00:31:54.494 Max Data Transfer Size: 131072 00:31:54.494 Max Number of Namespaces: 32 00:31:54.494 Max Number of I/O Queues: 127 00:31:54.494 NVMe Specification Version (VS): 1.3 00:31:54.494 NVMe Specification Version (Identify): 1.3 00:31:54.494 Maximum Queue Entries: 128 00:31:54.494 Contiguous Queues Required: Yes 00:31:54.494 Arbitration Mechanisms Supported 00:31:54.494 Weighted Round Robin: Not Supported 00:31:54.494 Vendor Specific: Not Supported 00:31:54.494 Reset Timeout: 15000 ms 00:31:54.494 Doorbell Stride: 4 bytes 00:31:54.494 NVM Subsystem Reset: Not Supported 00:31:54.494 Command Sets Supported 00:31:54.494 NVM Command Set: Supported 00:31:54.494 Boot Partition: Not Supported 00:31:54.494 Memory Page Size Minimum: 4096 bytes 00:31:54.494 Memory Page Size Maximum: 4096 bytes 00:31:54.495 Persistent Memory Region: Not Supported 00:31:54.495 Optional Asynchronous Events Supported 00:31:54.495 Namespace Attribute Notices: Supported 00:31:54.495 Firmware Activation Notices: Not Supported 00:31:54.495 ANA Change Notices: Not Supported 00:31:54.495 PLE 
Aggregate Log Change Notices: Not Supported 00:31:54.495 LBA Status Info Alert Notices: Not Supported 00:31:54.495 EGE Aggregate Log Change Notices: Not Supported 00:31:54.495 Normal NVM Subsystem Shutdown event: Not Supported 00:31:54.495 Zone Descriptor Change Notices: Not Supported 00:31:54.495 Discovery Log Change Notices: Not Supported 00:31:54.495 Controller Attributes 00:31:54.495 128-bit Host Identifier: Supported 00:31:54.495 Non-Operational Permissive Mode: Not Supported 00:31:54.495 NVM Sets: Not Supported 00:31:54.495 Read Recovery Levels: Not Supported 00:31:54.495 Endurance Groups: Not Supported 00:31:54.495 Predictable Latency Mode: Not Supported 00:31:54.495 Traffic Based Keep ALive: Not Supported 00:31:54.495 Namespace Granularity: Not Supported 00:31:54.495 SQ Associations: Not Supported 00:31:54.495 UUID List: Not Supported 00:31:54.495 Multi-Domain Subsystem: Not Supported 00:31:54.495 Fixed Capacity Management: Not Supported 00:31:54.495 Variable Capacity Management: Not Supported 00:31:54.495 Delete Endurance Group: Not Supported 00:31:54.495 Delete NVM Set: Not Supported 00:31:54.495 Extended LBA Formats Supported: Not Supported 00:31:54.495 Flexible Data Placement Supported: Not Supported 00:31:54.495 00:31:54.495 Controller Memory Buffer Support 00:31:54.495 ================================ 00:31:54.495 Supported: No 00:31:54.495 00:31:54.495 Persistent Memory Region Support 00:31:54.495 ================================ 00:31:54.495 Supported: No 00:31:54.495 00:31:54.495 Admin Command Set Attributes 00:31:54.495 ============================ 00:31:54.495 Security Send/Receive: Not Supported 00:31:54.495 Format NVM: Not Supported 00:31:54.495 Firmware Activate/Download: Not Supported 00:31:54.495 Namespace Management: Not Supported 00:31:54.495 Device Self-Test: Not Supported 00:31:54.495 Directives: Not Supported 00:31:54.495 NVMe-MI: Not Supported 00:31:54.495 Virtualization Management: Not Supported 00:31:54.495 Doorbell Buffer Config: 
Not Supported 00:31:54.495 Get LBA Status Capability: Not Supported 00:31:54.495 Command & Feature Lockdown Capability: Not Supported 00:31:54.495 Abort Command Limit: 4 00:31:54.495 Async Event Request Limit: 4 00:31:54.495 Number of Firmware Slots: N/A 00:31:54.495 Firmware Slot 1 Read-Only: N/A 00:31:54.495 Firmware Activation Without Reset: N/A 00:31:54.495 Multiple Update Detection Support: N/A 00:31:54.495 Firmware Update Granularity: No Information Provided 00:31:54.495 Per-Namespace SMART Log: No 00:31:54.495 Asymmetric Namespace Access Log Page: Not Supported 00:31:54.495 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:54.495 Command Effects Log Page: Supported 00:31:54.495 Get Log Page Extended Data: Supported 00:31:54.495 Telemetry Log Pages: Not Supported 00:31:54.495 Persistent Event Log Pages: Not Supported 00:31:54.495 Supported Log Pages Log Page: May Support 00:31:54.495 Commands Supported & Effects Log Page: Not Supported 00:31:54.495 Feature Identifiers & Effects Log Page:May Support 00:31:54.495 NVMe-MI Commands & Effects Log Page: May Support 00:31:54.495 Data Area 4 for Telemetry Log: Not Supported 00:31:54.495 Error Log Page Entries Supported: 128 00:31:54.495 Keep Alive: Supported 00:31:54.495 Keep Alive Granularity: 10000 ms 00:31:54.495 00:31:54.495 NVM Command Set Attributes 00:31:54.495 ========================== 00:31:54.495 Submission Queue Entry Size 00:31:54.495 Max: 64 00:31:54.495 Min: 64 00:31:54.495 Completion Queue Entry Size 00:31:54.495 Max: 16 00:31:54.495 Min: 16 00:31:54.495 Number of Namespaces: 32 00:31:54.495 Compare Command: Supported 00:31:54.495 Write Uncorrectable Command: Not Supported 00:31:54.495 Dataset Management Command: Supported 00:31:54.495 Write Zeroes Command: Supported 00:31:54.495 Set Features Save Field: Not Supported 00:31:54.495 Reservations: Supported 00:31:54.495 Timestamp: Not Supported 00:31:54.495 Copy: Supported 00:31:54.495 Volatile Write Cache: Present 00:31:54.495 Atomic Write Unit (Normal): 
1 00:31:54.495 Atomic Write Unit (PFail): 1 00:31:54.495 Atomic Compare & Write Unit: 1 00:31:54.495 Fused Compare & Write: Supported 00:31:54.495 Scatter-Gather List 00:31:54.495 SGL Command Set: Supported 00:31:54.495 SGL Keyed: Supported 00:31:54.495 SGL Bit Bucket Descriptor: Not Supported 00:31:54.495 SGL Metadata Pointer: Not Supported 00:31:54.495 Oversized SGL: Not Supported 00:31:54.495 SGL Metadata Address: Not Supported 00:31:54.495 SGL Offset: Supported 00:31:54.495 Transport SGL Data Block: Not Supported 00:31:54.495 Replay Protected Memory Block: Not Supported 00:31:54.495 00:31:54.495 Firmware Slot Information 00:31:54.495 ========================= 00:31:54.495 Active slot: 1 00:31:54.495 Slot 1 Firmware Revision: 25.01 00:31:54.495 00:31:54.495 00:31:54.495 Commands Supported and Effects 00:31:54.495 ============================== 00:31:54.495 Admin Commands 00:31:54.495 -------------- 00:31:54.495 Get Log Page (02h): Supported 00:31:54.495 Identify (06h): Supported 00:31:54.495 Abort (08h): Supported 00:31:54.496 Set Features (09h): Supported 00:31:54.496 Get Features (0Ah): Supported 00:31:54.496 Asynchronous Event Request (0Ch): Supported 00:31:54.496 Keep Alive (18h): Supported 00:31:54.496 I/O Commands 00:31:54.496 ------------ 00:31:54.496 Flush (00h): Supported LBA-Change 00:31:54.496 Write (01h): Supported LBA-Change 00:31:54.496 Read (02h): Supported 00:31:54.496 Compare (05h): Supported 00:31:54.496 Write Zeroes (08h): Supported LBA-Change 00:31:54.496 Dataset Management (09h): Supported LBA-Change 00:31:54.496 Copy (19h): Supported LBA-Change 00:31:54.496 00:31:54.496 Error Log 00:31:54.496 ========= 00:31:54.496 00:31:54.496 Arbitration 00:31:54.496 =========== 00:31:54.496 Arbitration Burst: 1 00:31:54.496 00:31:54.496 Power Management 00:31:54.496 ================ 00:31:54.496 Number of Power States: 1 00:31:54.496 Current Power State: Power State #0 00:31:54.496 Power State #0: 00:31:54.496 Max Power: 0.00 W 00:31:54.496 
Non-Operational State: Operational 00:31:54.496 Entry Latency: Not Reported 00:31:54.496 Exit Latency: Not Reported 00:31:54.496 Relative Read Throughput: 0 00:31:54.496 Relative Read Latency: 0 00:31:54.496 Relative Write Throughput: 0 00:31:54.496 Relative Write Latency: 0 00:31:54.496 Idle Power: Not Reported 00:31:54.496 Active Power: Not Reported 00:31:54.496 Non-Operational Permissive Mode: Not Supported 00:31:54.496 00:31:54.496 Health Information 00:31:54.496 ================== 00:31:54.496 Critical Warnings: 00:31:54.496 Available Spare Space: OK 00:31:54.496 Temperature: OK 00:31:54.496 Device Reliability: OK 00:31:54.496 Read Only: No 00:31:54.496 Volatile Memory Backup: OK 00:31:54.496 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:54.496 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:54.496 Available Spare: 0% 00:31:54.496 Available Spare Threshold: 0% 00:31:54.496 Life Percentage Used:[2024-11-06 15:36:21.868972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.496 [2024-11-06 15:36:21.868980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500001db80) 00:31:54.496 [2024-11-06 15:36:21.868991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.496 [2024-11-06 15:36:21.869008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:54.496 [2024-11-06 15:36:21.869098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.496 [2024-11-06 15:36:21.869107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.496 [2024-11-06 15:36:21.869113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.496 [2024-11-06 15:36:21.869119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:31:54.496 [2024-11-06 15:36:21.869164] 
nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:31:54.496 [2024-11-06 15:36:21.869177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:54.496 [2024-11-06 15:36:21.869190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.496 [2024-11-06 15:36:21.869198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80 00:31:54.496 [2024-11-06 15:36:21.873217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.496 [2024-11-06 15:36:21.873229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80 00:31:54.496 [2024-11-06 15:36:21.873236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.496 [2024-11-06 15:36:21.873242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.496 [2024-11-06 15:36:21.873249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.496 [2024-11-06 15:36:21.873261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.496 [2024-11-06 15:36:21.873268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.496 [2024-11-06 15:36:21.873274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.496 [2024-11-06 15:36:21.873285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.496 [2024-11-06 15:36:21.873306] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.496 [2024-11-06 15:36:21.873462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.496 [2024-11-06 15:36:21.873472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.496 [2024-11-06 15:36:21.873477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.496 [2024-11-06 15:36:21.873483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.496 [2024-11-06 15:36:21.873498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.496 [2024-11-06 15:36:21.873505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.496 [2024-11-06 15:36:21.873511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.496 [2024-11-06 15:36:21.873521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.496 [2024-11-06 15:36:21.873543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.496 [2024-11-06 15:36:21.873635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.496 [2024-11-06 15:36:21.873643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.496 [2024-11-06 15:36:21.873648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.496 [2024-11-06 15:36:21.873653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.496 [2024-11-06 15:36:21.873661] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:31:54.497 [2024-11-06 15:36:21.873667] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 
00:31:54.497 [2024-11-06 15:36:21.873679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.873686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.873695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.497 [2024-11-06 15:36:21.873705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.497 [2024-11-06 15:36:21.873719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.497 [2024-11-06 15:36:21.873786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.497 [2024-11-06 15:36:21.873795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.497 [2024-11-06 15:36:21.873799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.873804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.497 [2024-11-06 15:36:21.873817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.873823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.873827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.497 [2024-11-06 15:36:21.873839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.497 [2024-11-06 15:36:21.873852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.497 [2024-11-06 15:36:21.873938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.497 [2024-11-06 15:36:21.873950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:31:54.497 [2024-11-06 15:36:21.873954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.873960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.497 [2024-11-06 15:36:21.873971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.873977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.873982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.497 [2024-11-06 15:36:21.873991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.497 [2024-11-06 15:36:21.874004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.497 [2024-11-06 15:36:21.874074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.497 [2024-11-06 15:36:21.874082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.497 [2024-11-06 15:36:21.874087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.874092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.497 [2024-11-06 15:36:21.874104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.874109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.874117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.497 [2024-11-06 15:36:21.874126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.497 [2024-11-06 15:36:21.874139] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.497 [2024-11-06 15:36:21.874222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.497 [2024-11-06 15:36:21.874231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.497 [2024-11-06 15:36:21.874236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.874241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.497 [2024-11-06 15:36:21.874253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.874259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.874264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.497 [2024-11-06 15:36:21.874273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.497 [2024-11-06 15:36:21.874287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.497 [2024-11-06 15:36:21.874355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.497 [2024-11-06 15:36:21.874364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.497 [2024-11-06 15:36:21.874368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.874373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.497 [2024-11-06 15:36:21.874385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.874391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.874396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.497 [2024-11-06 15:36:21.874405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.497 [2024-11-06 15:36:21.874418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.497 [2024-11-06 15:36:21.874498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.497 [2024-11-06 15:36:21.874506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.497 [2024-11-06 15:36:21.874511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.874516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.497 [2024-11-06 15:36:21.874528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.874534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.497 [2024-11-06 15:36:21.874539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.497 [2024-11-06 15:36:21.874548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.497 [2024-11-06 15:36:21.874561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.497 [2024-11-06 15:36:21.874627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.497 [2024-11-06 15:36:21.874635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.498 [2024-11-06 15:36:21.874640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.874645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.498 [2024-11-06 
15:36:21.874656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.874662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.874669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.498 [2024-11-06 15:36:21.874678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.498 [2024-11-06 15:36:21.874691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.498 [2024-11-06 15:36:21.874764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.498 [2024-11-06 15:36:21.874771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.498 [2024-11-06 15:36:21.874779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.874785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.498 [2024-11-06 15:36:21.874798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.874803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.874808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.498 [2024-11-06 15:36:21.874817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.498 [2024-11-06 15:36:21.874831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.498 [2024-11-06 15:36:21.874900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.498 [2024-11-06 15:36:21.874908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:31:54.498 [2024-11-06 15:36:21.874918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.874923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.498 [2024-11-06 15:36:21.874936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.874941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.874946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.498 [2024-11-06 15:36:21.874959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.498 [2024-11-06 15:36:21.874972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.498 [2024-11-06 15:36:21.875049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.498 [2024-11-06 15:36:21.875060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.498 [2024-11-06 15:36:21.875065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.875070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.498 [2024-11-06 15:36:21.875082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.875087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.875092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.498 [2024-11-06 15:36:21.875101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.498 [2024-11-06 15:36:21.875114] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.498 [2024-11-06 15:36:21.875182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.498 [2024-11-06 15:36:21.875190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.498 [2024-11-06 15:36:21.875195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.875200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.498 [2024-11-06 15:36:21.875219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.875224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.875231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.498 [2024-11-06 15:36:21.875240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.498 [2024-11-06 15:36:21.875254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.498 [2024-11-06 15:36:21.875324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.498 [2024-11-06 15:36:21.875332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.498 [2024-11-06 15:36:21.875336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.875341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.498 [2024-11-06 15:36:21.875353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.875359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.875363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.498 [2024-11-06 15:36:21.875372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.498 [2024-11-06 15:36:21.875385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.498 [2024-11-06 15:36:21.875463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.498 [2024-11-06 15:36:21.875471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.498 [2024-11-06 15:36:21.875476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.875481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.498 [2024-11-06 15:36:21.875492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.875498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.498 [2024-11-06 15:36:21.875502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.498 [2024-11-06 15:36:21.875511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.498 [2024-11-06 15:36:21.875524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.498 [2024-11-06 15:36:21.875595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.498 [2024-11-06 15:36:21.875603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.498 [2024-11-06 15:36:21.875607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.875612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.499 [2024-11-06 
15:36:21.875624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.875629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.875634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.499 [2024-11-06 15:36:21.875643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.499 [2024-11-06 15:36:21.875656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.499 [2024-11-06 15:36:21.875723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.499 [2024-11-06 15:36:21.875731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.499 [2024-11-06 15:36:21.875736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.875741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.499 [2024-11-06 15:36:21.875753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.875758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.875763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.499 [2024-11-06 15:36:21.875774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.499 [2024-11-06 15:36:21.875787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.499 [2024-11-06 15:36:21.875856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.499 [2024-11-06 15:36:21.875864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:31:54.499 [2024-11-06 15:36:21.875868] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.875874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.499 [2024-11-06 15:36:21.875885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.875891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.875896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.499 [2024-11-06 15:36:21.875904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.499 [2024-11-06 15:36:21.875917] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.499 [2024-11-06 15:36:21.875994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.499 [2024-11-06 15:36:21.876002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.499 [2024-11-06 15:36:21.876006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.499 [2024-11-06 15:36:21.876024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.499 [2024-11-06 15:36:21.876046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.499 [2024-11-06 15:36:21.876059] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.499 [2024-11-06 15:36:21.876132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.499 [2024-11-06 15:36:21.876142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.499 [2024-11-06 15:36:21.876147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.499 [2024-11-06 15:36:21.876171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.499 [2024-11-06 15:36:21.876191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.499 [2024-11-06 15:36:21.876210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.499 [2024-11-06 15:36:21.876280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.499 [2024-11-06 15:36:21.876288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.499 [2024-11-06 15:36:21.876293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.499 [2024-11-06 15:36:21.876310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.499 [2024-11-06 15:36:21.876331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.499 [2024-11-06 15:36:21.876344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.499 [2024-11-06 15:36:21.876424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.499 [2024-11-06 15:36:21.876432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.499 [2024-11-06 15:36:21.876436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.499 [2024-11-06 15:36:21.876454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.499 [2024-11-06 15:36:21.876473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.499 [2024-11-06 15:36:21.876486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.499 [2024-11-06 15:36:21.876556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.499 [2024-11-06 15:36:21.876564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.499 [2024-11-06 15:36:21.876569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.499 [2024-11-06 
15:36:21.876585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.499 [2024-11-06 15:36:21.876605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.499 [2024-11-06 15:36:21.876618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.499 [2024-11-06 15:36:21.876690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.499 [2024-11-06 15:36:21.876698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.499 [2024-11-06 15:36:21.876703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.499 [2024-11-06 15:36:21.876719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.499 [2024-11-06 15:36:21.876738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.499 [2024-11-06 15:36:21.876751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.499 [2024-11-06 15:36:21.876820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.499 [2024-11-06 15:36:21.876829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:31:54.499 [2024-11-06 15:36:21.876833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.499 [2024-11-06 15:36:21.876858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.499 [2024-11-06 15:36:21.876869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.499 [2024-11-06 15:36:21.876879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.499 [2024-11-06 15:36:21.876892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.499 [2024-11-06 15:36:21.876961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.500 [2024-11-06 15:36:21.876969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.500 [2024-11-06 15:36:21.876974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.500 [2024-11-06 15:36:21.876979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.500 [2024-11-06 15:36:21.876991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.500 [2024-11-06 15:36:21.876996] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.500 [2024-11-06 15:36:21.877001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.500 [2024-11-06 15:36:21.877010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.500 [2024-11-06 15:36:21.877023] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.500 [2024-11-06 15:36:21.877093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.500 [2024-11-06 15:36:21.877101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.500 [2024-11-06 15:36:21.877106] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.500 [2024-11-06 15:36:21.877111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.500 [2024-11-06 15:36:21.877122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.500 [2024-11-06 15:36:21.877128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.500 [2024-11-06 15:36:21.877133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.500 [2024-11-06 15:36:21.877145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.500 [2024-11-06 15:36:21.877157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.500 [2024-11-06 15:36:21.881220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.500 [2024-11-06 15:36:21.881240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.500 [2024-11-06 15:36:21.881245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.500 [2024-11-06 15:36:21.881251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.500 [2024-11-06 15:36:21.881267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:54.500 [2024-11-06 15:36:21.881273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:54.500 [2024-11-06 15:36:21.881278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:54.500 [2024-11-06 15:36:21.881288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:54.500 [2024-11-06 15:36:21.881304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:54.500 [2024-11-06 15:36:21.881483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:54.500 [2024-11-06 15:36:21.881491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:54.500 [2024-11-06 15:36:21.881496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:54.500 [2024-11-06 15:36:21.881501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:54.500 [2024-11-06 15:36:21.881512] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:31:54.500 0% 00:31:54.500 Data Units Read: 0 00:31:54.500 Data Units Written: 0 00:31:54.500 Host Read Commands: 0 00:31:54.500 Host Write Commands: 0 00:31:54.500 Controller Busy Time: 0 minutes 00:31:54.500 Power Cycles: 0 00:31:54.500 Power On Hours: 0 hours 00:31:54.500 Unsafe Shutdowns: 0 00:31:54.500 Unrecoverable Media Errors: 0 00:31:54.500 Lifetime Error Log Entries: 0 00:31:54.500 Warning Temperature Time: 0 minutes 00:31:54.500 Critical Temperature Time: 0 minutes 00:31:54.500 00:31:54.500 Number of Queues 00:31:54.500 ================ 00:31:54.500 Number of I/O Submission Queues: 127 00:31:54.500 Number of I/O Completion Queues: 127 00:31:54.500 00:31:54.500 Active Namespaces 00:31:54.500 ================= 00:31:54.500 Namespace ID:1 00:31:54.500 Error Recovery Timeout: Unlimited 00:31:54.500 Command Set Identifier: NVM (00h) 00:31:54.500 Deallocate: Supported 00:31:54.500 Deallocated/Unwritten Error: Not Supported 00:31:54.500 Deallocated Read Value: Unknown 00:31:54.500 
Deallocate in Write Zeroes: Not Supported 00:31:54.500 Deallocated Guard Field: 0xFFFF 00:31:54.500 Flush: Supported 00:31:54.500 Reservation: Supported 00:31:54.500 Namespace Sharing Capabilities: Multiple Controllers 00:31:54.500 Size (in LBAs): 131072 (0GiB) 00:31:54.500 Capacity (in LBAs): 131072 (0GiB) 00:31:54.500 Utilization (in LBAs): 131072 (0GiB) 00:31:54.500 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:54.500 EUI64: ABCDEF0123456789 00:31:54.500 UUID: 6ec477c4-0cbd-45a7-b1f2-5fefe635eb52 00:31:54.500 Thin Provisioning: Not Supported 00:31:54.500 Per-NS Atomic Units: Yes 00:31:54.500 Atomic Boundary Size (Normal): 0 00:31:54.500 Atomic Boundary Size (PFail): 0 00:31:54.500 Atomic Boundary Offset: 0 00:31:54.500 Maximum Single Source Range Length: 65535 00:31:54.500 Maximum Copy Length: 65535 00:31:54.500 Maximum Source Range Count: 1 00:31:54.500 NGUID/EUI64 Never Reused: No 00:31:54.500 Namespace Write Protected: No 00:31:54.500 Number of LBA Formats: 1 00:31:54.500 Current LBA Format: LBA Format #00 00:31:54.500 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:54.500 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@121 -- # sync 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:54.500 rmmod nvme_tcp 00:31:54.500 rmmod nvme_fabrics 00:31:54.500 rmmod nvme_keyring 00:31:54.500 15:36:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:54.500 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:54.500 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:54.500 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 4008167 ']' 00:31:54.500 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 4008167 00:31:54.500 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 4008167 ']' 00:31:54.500 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 4008167 00:31:54.500 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:31:54.500 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:54.500 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4008167 00:31:54.500 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:54.500 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:54.500 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4008167' 00:31:54.500 killing process with pid 4008167 00:31:54.500 
15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 4008167 00:31:54.760 15:36:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 4008167 00:31:56.137 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:56.137 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:56.137 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:56.137 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:56.137 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:56.137 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:31:56.137 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:31:56.137 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:56.137 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:56.137 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.137 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:56.137 15:36:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:58.042 00:31:58.042 real 0m11.427s 00:31:58.042 user 0m12.286s 00:31:58.042 sys 0m5.067s 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:58.042 ************************************ 00:31:58.042 END TEST nvmf_identify 00:31:58.042 
************************************ 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.042 ************************************ 00:31:58.042 START TEST nvmf_perf 00:31:58.042 ************************************ 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:58.042 * Looking for test storage... 00:31:58.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@337 -- # read -ra ver2 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:58.042 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:58.302 15:36:25 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:58.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.302 --rc genhtml_branch_coverage=1 00:31:58.302 --rc genhtml_function_coverage=1 00:31:58.302 --rc genhtml_legend=1 00:31:58.302 --rc geninfo_all_blocks=1 00:31:58.302 --rc geninfo_unexecuted_blocks=1 00:31:58.302 00:31:58.302 ' 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:58.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.302 --rc genhtml_branch_coverage=1 00:31:58.302 --rc genhtml_function_coverage=1 00:31:58.302 --rc genhtml_legend=1 00:31:58.302 --rc geninfo_all_blocks=1 00:31:58.302 --rc geninfo_unexecuted_blocks=1 00:31:58.302 00:31:58.302 ' 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:58.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.302 --rc genhtml_branch_coverage=1 00:31:58.302 --rc genhtml_function_coverage=1 00:31:58.302 --rc genhtml_legend=1 00:31:58.302 --rc geninfo_all_blocks=1 00:31:58.302 --rc geninfo_unexecuted_blocks=1 00:31:58.302 00:31:58.302 ' 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:58.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.302 --rc genhtml_branch_coverage=1 00:31:58.302 --rc genhtml_function_coverage=1 00:31:58.302 --rc genhtml_legend=1 00:31:58.302 --rc geninfo_all_blocks=1 00:31:58.302 --rc geninfo_unexecuted_blocks=1 00:31:58.302 00:31:58.302 ' 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.302 15:36:25 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:58.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:31:58.302 15:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.004 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:05.005 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:05.005 
15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:05.005 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:05.005 Found net devices under 0000:86:00.0: cvl_0_0 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:05.005 Found net devices under 0000:86:00.1: cvl_0_1 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:05.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:05.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:32:05.005 00:32:05.005 --- 10.0.0.2 ping statistics --- 00:32:05.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.005 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:05.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:05.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:32:05.005 00:32:05.005 --- 10.0.0.1 ping statistics --- 00:32:05.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.005 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=4012180 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 4012180 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:05.005 
15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 4012180 ']' 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:05.005 15:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:05.005 [2024-11-06 15:36:31.738617] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:32:05.005 [2024-11-06 15:36:31.738722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.006 [2024-11-06 15:36:31.866113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:05.006 [2024-11-06 15:36:31.973169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.006 [2024-11-06 15:36:31.973220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.006 [2024-11-06 15:36:31.973232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:05.006 [2024-11-06 15:36:31.973242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:05.006 [2024-11-06 15:36:31.973250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:05.006 [2024-11-06 15:36:31.975847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.006 [2024-11-06 15:36:31.975941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:05.006 [2024-11-06 15:36:31.976005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.006 [2024-11-06 15:36:31.976011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:05.006 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:05.006 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:32:05.006 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:05.006 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:05.006 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:05.006 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.006 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:05.006 15:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:32:08.297 15:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:32:08.297 15:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:32:08.297 15:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:32:08.297 15:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:08.556 15:36:36 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:32:08.557 15:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:32:08.557 15:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:32:08.557 15:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:32:08.557 15:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:08.816 [2024-11-06 15:36:36.307244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.816 15:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:09.074 15:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:09.074 15:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:09.333 15:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:09.333 15:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:09.590 15:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.590 [2024-11-06 15:36:37.146729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.590 15:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:32:09.849 15:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:32:09.849 15:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:32:09.849 15:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:32:09.849 15:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:32:11.227 Initializing NVMe Controllers 00:32:11.227 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:32:11.227 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:32:11.227 Initialization complete. Launching workers. 00:32:11.227 ======================================================== 00:32:11.227 Latency(us) 00:32:11.227 Device Information : IOPS MiB/s Average min max 00:32:11.227 PCIE (0000:5e:00.0) NSID 1 from core 0: 90619.85 353.98 352.61 33.20 8252.19 00:32:11.227 ======================================================== 00:32:11.227 Total : 90619.85 353.98 352.61 33.20 8252.19 00:32:11.227 00:32:11.227 15:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:13.138 Initializing NVMe Controllers 00:32:13.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:13.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:13.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:13.138 Initialization complete. Launching workers. 
00:32:13.138 ======================================================== 00:32:13.138 Latency(us) 00:32:13.138 Device Information : IOPS MiB/s Average min max 00:32:13.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.72 0.31 12672.20 129.67 44819.79 00:32:13.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.81 0.21 19158.22 6971.05 54388.90 00:32:13.138 ======================================================== 00:32:13.138 Total : 133.53 0.52 15285.97 129.67 54388.90 00:32:13.138 00:32:13.138 15:36:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:14.513 Initializing NVMe Controllers 00:32:14.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:14.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:14.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:14.513 Initialization complete. Launching workers. 
00:32:14.513 ======================================================== 00:32:14.514 Latency(us) 00:32:14.514 Device Information : IOPS MiB/s Average min max 00:32:14.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9445.00 36.89 3396.35 574.76 8328.45 00:32:14.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3833.00 14.97 8376.83 5337.74 16470.42 00:32:14.514 ======================================================== 00:32:14.514 Total : 13278.00 51.87 4834.08 574.76 16470.42 00:32:14.514 00:32:14.514 15:36:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:32:14.514 15:36:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:32:14.514 15:36:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:17.048 Initializing NVMe Controllers 00:32:17.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:17.048 Controller IO queue size 128, less than required. 00:32:17.048 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:17.048 Controller IO queue size 128, less than required. 00:32:17.048 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:17.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:17.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:17.048 Initialization complete. Launching workers. 
00:32:17.048 ======================================================== 00:32:17.048 Latency(us) 00:32:17.048 Device Information : IOPS MiB/s Average min max 00:32:17.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1488.38 372.10 90005.00 62024.23 329473.74 00:32:17.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 557.46 139.36 247309.59 149030.17 571920.87 00:32:17.048 ======================================================== 00:32:17.048 Total : 2045.84 511.46 132867.81 62024.23 571920.87 00:32:17.048 00:32:17.308 15:36:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:32:17.567 No valid NVMe controllers or AIO or URING devices found 00:32:17.567 Initializing NVMe Controllers 00:32:17.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:17.567 Controller IO queue size 128, less than required. 00:32:17.567 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:17.567 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:32:17.567 Controller IO queue size 128, less than required. 00:32:17.567 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:17.567 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:32:17.567 WARNING: Some requested NVMe devices were skipped 00:32:17.567 15:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:32:20.856 Initializing NVMe Controllers 00:32:20.856 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:20.856 Controller IO queue size 128, less than required. 00:32:20.856 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:20.856 Controller IO queue size 128, less than required. 00:32:20.856 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:20.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:20.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:20.856 Initialization complete. Launching workers. 
00:32:20.856 00:32:20.856 ==================== 00:32:20.856 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:20.856 TCP transport: 00:32:20.856 polls: 10425 00:32:20.856 idle_polls: 7544 00:32:20.856 sock_completions: 2881 00:32:20.856 nvme_completions: 5453 00:32:20.856 submitted_requests: 8180 00:32:20.856 queued_requests: 1 00:32:20.856 00:32:20.856 ==================== 00:32:20.856 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:20.856 TCP transport: 00:32:20.856 polls: 11872 00:32:20.856 idle_polls: 8854 00:32:20.856 sock_completions: 3018 00:32:20.856 nvme_completions: 5359 00:32:20.856 submitted_requests: 7968 00:32:20.856 queued_requests: 1 00:32:20.856 ======================================================== 00:32:20.856 Latency(us) 00:32:20.856 Device Information : IOPS MiB/s Average min max 00:32:20.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1362.07 340.52 98988.75 61060.02 324491.78 00:32:20.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1338.59 334.65 99971.23 56956.70 507380.49 00:32:20.857 ======================================================== 00:32:20.857 Total : 2700.66 675.16 99475.72 56956.70 507380.49 00:32:20.857 00:32:20.857 15:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:32:20.857 15:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:20.857 15:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:20.857 15:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:32:20.857 15:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:26.130 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=94cf136f-c204-4a4b-8441-7b676e390885 00:32:26.130 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 94cf136f-c204-4a4b-8441-7b676e390885 00:32:26.130 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=94cf136f-c204-4a4b-8441-7b676e390885 00:32:26.130 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:32:26.130 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:32:26.130 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:32:26.130 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:26.388 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:32:26.388 { 00:32:26.388 "uuid": "94cf136f-c204-4a4b-8441-7b676e390885", 00:32:26.388 "name": "lvs_0", 00:32:26.388 "base_bdev": "Nvme0n1", 00:32:26.388 "total_data_clusters": 381173, 00:32:26.388 "free_clusters": 381173, 00:32:26.388 "block_size": 512, 00:32:26.388 "cluster_size": 4194304 00:32:26.388 } 00:32:26.388 ]' 00:32:26.388 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="94cf136f-c204-4a4b-8441-7b676e390885") .free_clusters' 00:32:26.388 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=381173 00:32:26.388 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="94cf136f-c204-4a4b-8441-7b676e390885") .cluster_size' 00:32:26.388 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:32:26.388 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=1524692 00:32:26.388 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 1524692 
00:32:26.388 1524692 00:32:26.388 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1524692 -gt 20480 ']' 00:32:26.388 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:32:26.388 15:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94cf136f-c204-4a4b-8441-7b676e390885 lbd_0 20480 00:32:26.648 15:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=28477328-8bdd-4c75-b198-713a6b7996c5 00:32:26.648 15:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 28477328-8bdd-4c75-b198-713a6b7996c5 lvs_n_0 00:32:28.026 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=33048800-5fe2-4ec8-8a2d-eb3e724d927f 00:32:28.026 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 33048800-5fe2-4ec8-8a2d-eb3e724d927f 00:32:28.026 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=33048800-5fe2-4ec8-8a2d-eb3e724d927f 00:32:28.026 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:32:28.026 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:32:28.026 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:32:28.026 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:28.026 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:32:28.026 { 00:32:28.026 "uuid": "94cf136f-c204-4a4b-8441-7b676e390885", 00:32:28.026 "name": "lvs_0", 00:32:28.026 "base_bdev": "Nvme0n1", 00:32:28.026 "total_data_clusters": 381173, 00:32:28.026 "free_clusters": 376053, 00:32:28.026 "block_size": 512, 00:32:28.026 
"cluster_size": 4194304 00:32:28.026 }, 00:32:28.026 { 00:32:28.026 "uuid": "33048800-5fe2-4ec8-8a2d-eb3e724d927f", 00:32:28.026 "name": "lvs_n_0", 00:32:28.026 "base_bdev": "28477328-8bdd-4c75-b198-713a6b7996c5", 00:32:28.026 "total_data_clusters": 5114, 00:32:28.026 "free_clusters": 5114, 00:32:28.026 "block_size": 512, 00:32:28.026 "cluster_size": 4194304 00:32:28.026 } 00:32:28.026 ]' 00:32:28.026 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="33048800-5fe2-4ec8-8a2d-eb3e724d927f") .free_clusters' 00:32:28.026 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=5114 00:32:28.026 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="33048800-5fe2-4ec8-8a2d-eb3e724d927f") .cluster_size' 00:32:28.284 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:32:28.284 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=20456 00:32:28.284 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 20456 00:32:28.284 20456 00:32:28.284 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:32:28.284 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 33048800-5fe2-4ec8-8a2d-eb3e724d927f lbd_nest_0 20456 00:32:28.284 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=e6680353-ceaa-4d0a-8585-effec70eba87 00:32:28.284 15:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:28.544 15:36:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:28.544 15:36:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e6680353-ceaa-4d0a-8585-effec70eba87 00:32:28.802 15:36:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:29.061 15:36:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:29.061 15:36:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:29.061 15:36:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:29.061 15:36:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:29.061 15:36:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:41.272 Initializing NVMe Controllers 00:32:41.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:41.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:41.272 Initialization complete. Launching workers. 
00:32:41.272 ======================================================== 00:32:41.272 Latency(us) 00:32:41.272 Device Information : IOPS MiB/s Average min max 00:32:41.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.58 0.02 21469.38 148.90 45664.39 00:32:41.272 ======================================================== 00:32:41.272 Total : 46.58 0.02 21469.38 148.90 45664.39 00:32:41.272 00:32:41.272 15:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:41.272 15:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:51.246 Initializing NVMe Controllers 00:32:51.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:51.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:51.246 Initialization complete. Launching workers. 
00:32:51.246 ======================================================== 00:32:51.246 Latency(us) 00:32:51.246 Device Information : IOPS MiB/s Average min max 00:32:51.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 73.00 9.12 13697.63 5048.12 55869.50 00:32:51.246 ======================================================== 00:32:51.246 Total : 73.00 9.12 13697.63 5048.12 55869.50 00:32:51.246 00:32:51.246 15:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:51.246 15:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:51.246 15:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:01.223 Initializing NVMe Controllers 00:33:01.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:01.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:01.223 Initialization complete. Launching workers. 
00:33:01.223 ======================================================== 00:33:01.223 Latency(us) 00:33:01.224 Device Information : IOPS MiB/s Average min max 00:33:01.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8201.67 4.00 3902.01 310.76 9336.17 00:33:01.224 ======================================================== 00:33:01.224 Total : 8201.67 4.00 3902.01 310.76 9336.17 00:33:01.224 00:33:01.224 15:37:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:01.224 15:37:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:11.209 Initializing NVMe Controllers 00:33:11.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:11.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:11.209 Initialization complete. Launching workers. 
00:33:11.209 ======================================================== 00:33:11.209 Latency(us) 00:33:11.210 Device Information : IOPS MiB/s Average min max 00:33:11.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3933.77 491.72 8137.70 743.71 28194.23 00:33:11.210 ======================================================== 00:33:11.210 Total : 3933.77 491.72 8137.70 743.71 28194.23 00:33:11.210 00:33:11.210 15:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:11.210 15:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:11.210 15:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:21.192 Initializing NVMe Controllers 00:33:21.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:21.192 Controller IO queue size 128, less than required. 00:33:21.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:21.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:21.192 Initialization complete. Launching workers. 
00:33:21.192 ======================================================== 00:33:21.192 Latency(us) 00:33:21.192 Device Information : IOPS MiB/s Average min max 00:33:21.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12836.92 6.27 9972.13 1620.05 23047.49 00:33:21.192 ======================================================== 00:33:21.192 Total : 12836.92 6.27 9972.13 1620.05 23047.49 00:33:21.192 00:33:21.192 15:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:21.192 15:37:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:33.404 Initializing NVMe Controllers 00:33:33.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:33.404 Controller IO queue size 128, less than required. 00:33:33.404 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:33.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:33.404 Initialization complete. Launching workers. 
00:33:33.404 ======================================================== 00:33:33.404 Latency(us) 00:33:33.404 Device Information : IOPS MiB/s Average min max 00:33:33.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1208.10 151.01 106389.72 15416.73 228851.76 00:33:33.404 ======================================================== 00:33:33.404 Total : 1208.10 151.01 106389.72 15416.73 228851.76 00:33:33.404 00:33:33.404 15:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:33.404 15:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e6680353-ceaa-4d0a-8585-effec70eba87 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 28477328-8bdd-4c75-b198-713a6b7996c5 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:33.404 rmmod nvme_tcp 00:33:33.404 rmmod nvme_fabrics 00:33:33.404 rmmod nvme_keyring 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 4012180 ']' 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 4012180 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 4012180 ']' 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 4012180 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4012180 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4012180' 00:33:33.404 killing process with pid 4012180 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 4012180 00:33:33.404 15:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 4012180 00:33:36.789 15:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.789 15:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # 
[[ tcp == \t\c\p ]] 00:33:36.789 15:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.789 15:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:33:36.789 15:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:33:36.789 15:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.789 15:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:33:36.789 15:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.789 15:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.789 15:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.789 15:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.789 15:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.696 15:38:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:38.696 00:33:38.696 real 1m40.421s 00:33:38.696 user 6m0.317s 00:33:38.696 sys 0m16.990s 00:33:38.696 15:38:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:38.696 15:38:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:38.696 ************************************ 00:33:38.696 END TEST nvmf_perf 00:33:38.696 ************************************ 00:33:38.696 15:38:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:38.696 15:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:38.696 15:38:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:38.696 15:38:05 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:38.696 ************************************ 00:33:38.696 START TEST nvmf_fio_host 00:33:38.696 ************************************ 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:38.696 * Looking for test storage... 00:33:38.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:33:38.696 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- 
# export 'LCOV_OPTS= 00:33:38.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.697 --rc genhtml_branch_coverage=1 00:33:38.697 --rc genhtml_function_coverage=1 00:33:38.697 --rc genhtml_legend=1 00:33:38.697 --rc geninfo_all_blocks=1 00:33:38.697 --rc geninfo_unexecuted_blocks=1 00:33:38.697 00:33:38.697 ' 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:38.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.697 --rc genhtml_branch_coverage=1 00:33:38.697 --rc genhtml_function_coverage=1 00:33:38.697 --rc genhtml_legend=1 00:33:38.697 --rc geninfo_all_blocks=1 00:33:38.697 --rc geninfo_unexecuted_blocks=1 00:33:38.697 00:33:38.697 ' 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:38.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.697 --rc genhtml_branch_coverage=1 00:33:38.697 --rc genhtml_function_coverage=1 00:33:38.697 --rc genhtml_legend=1 00:33:38.697 --rc geninfo_all_blocks=1 00:33:38.697 --rc geninfo_unexecuted_blocks=1 00:33:38.697 00:33:38.697 ' 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:38.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.697 --rc genhtml_branch_coverage=1 00:33:38.697 --rc genhtml_function_coverage=1 00:33:38.697 --rc genhtml_legend=1 00:33:38.697 --rc geninfo_all_blocks=1 00:33:38.697 --rc geninfo_unexecuted_blocks=1 00:33:38.697 00:33:38.697 ' 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.697 15:38:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.697 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:38.698 15:38:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.698 15:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:33:45.269 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:45.269 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.269 15:38:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:45.269 Found net devices under 0000:86:00.0: cvl_0_0 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:45.269 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:45.270 Found net devices under 0000:86:00.1: cvl_0_1 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
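The discovery loop traced above resolves each matching PCI function (e.g. `0000:86:00.0`) to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/`* and stripping the directory prefix with `${pci_net_devs[@]##*/}`. A minimal sketch of that lookup, with the sysfs root parameterized (an assumption made here so the function can be pointed at a fake tree; the real root is `/sys/bus/pci/devices`):

```shell
# Sketch of the sysfs lookup nvmf/common.sh performs: given a PCI BDF such
# as 0000:86:00.0, list the net device names registered under it.
# sysfs_root is a hypothetical parameter for testability.
pci_net_devs() {
    local sysfs_root=$1 pci=$2
    local devs=("$sysfs_root/$pci/net/"*)   # one entry per interface dir
    printf '%s\n' "${devs[@]##*/}"          # keep only the interface names
}
```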
00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:45.270 15:38:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:45.270 15:38:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:45.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:45.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms
00:33:45.270
00:33:45.270 --- 10.0.0.2 ping statistics ---
00:33:45.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:45.270 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms
00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:45.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
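The `nvmf_tcp_init` trace above builds the loopback test topology: the target NIC (`cvl_0_0`) is moved into the `cvl_0_0_ns_spdk` namespace and given 10.0.0.2, the initiator NIC (`cvl_0_1`) stays in the root namespace with 10.0.0.1, and TCP/4420 is opened in the host firewall. A condensed sketch of that sequence, with `IP`/`IPTABLES` made overridable (an assumption added here so the sequence can be dry-run without root; the harness calls the tools directly):

```shell
# Sketch of the network setup traced in nvmf/common.sh. Interface names
# and addresses are taken from the log; IP/IPTABLES are hypothetical
# override hooks for dry-running.
: "${IP:=ip}" "${IPTABLES:=iptables}"
setup_tcp_test_net() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    $IP netns add "$ns"
    $IP link set "$tgt_if" netns "$ns"                    # target NIC lives in the namespace
    $IP addr add 10.0.0.1/24 dev "$ini_if"                # initiator side
    $IP netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target side
    $IP link set "$ini_if" up
    $IP netns exec "$ns" ip link set "$tgt_if" up
    $IP netns exec "$ns" ip link set lo up
    # Allow the NVMe/TCP listener port through the host firewall.
    $IPTABLES -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}
```

Setting `IP="echo ip"` and `IPTABLES="echo iptables"` turns the function into a dry run that prints the commands it would issue.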
00:33:45.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:33:45.270 00:33:45.270 --- 10.0.0.1 ping statistics --- 00:33:45.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.270 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4030193 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4030193 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 4030193 ']' 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:45.270 15:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.270 [2024-11-06 15:38:12.232950] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:33:45.270 [2024-11-06 15:38:12.233034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.270 [2024-11-06 15:38:12.360834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:45.270 [2024-11-06 15:38:12.469926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:45.270 [2024-11-06 15:38:12.469971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:45.270 [2024-11-06 15:38:12.469982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:45.270 [2024-11-06 15:38:12.470008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:45.270 [2024-11-06 15:38:12.470017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:45.270 [2024-11-06 15:38:12.472616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.270 [2024-11-06 15:38:12.472696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:45.270 [2024-11-06 15:38:12.472771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.270 [2024-11-06 15:38:12.472792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:45.529 15:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:45.529 15:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:33:45.529 15:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:45.788 [2024-11-06 15:38:13.206607] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:45.788 15:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:45.788 15:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:45.788 15:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.788 15:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:46.047 Malloc1 00:33:46.047 15:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:46.305 15:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:46.564 15:38:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.564 [2024-11-06 15:38:14.148043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.564 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:46.824 15:38:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:46.824 15:38:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:47.391 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:47.391 fio-3.35 00:33:47.391 Starting 1 thread 00:33:49.925 00:33:49.925 test: (groupid=0, jobs=1): err= 0: pid=4030793: Wed Nov 6 15:38:17 2024 00:33:49.925 read: 
IOPS=9923, BW=38.8MiB/s (40.6MB/s)(77.8MiB/2006msec) 00:33:49.925 slat (nsec): min=1739, max=190521, avg=1983.92, stdev=1882.68 00:33:49.925 clat (usec): min=2640, max=12603, avg=7067.24, stdev=550.58 00:33:49.925 lat (usec): min=2676, max=12605, avg=7069.23, stdev=550.42 00:33:49.925 clat percentiles (usec): 00:33:49.925 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6652], 00:33:49.925 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7177], 00:33:49.925 | 70.00th=[ 7373], 80.00th=[ 7504], 90.00th=[ 7701], 95.00th=[ 7898], 00:33:49.925 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[10945], 99.95th=[11731], 00:33:49.925 | 99.99th=[12518] 00:33:49.925 bw ( KiB/s): min=38650, max=40208, per=99.89%, avg=39650.50, stdev=736.00, samples=4 00:33:49.925 iops : min= 9662, max=10052, avg=9912.50, stdev=184.23, samples=4 00:33:49.925 write: IOPS=9946, BW=38.9MiB/s (40.7MB/s)(77.9MiB/2006msec); 0 zone resets 00:33:49.925 slat (nsec): min=1779, max=169152, avg=2020.27, stdev=1370.97 00:33:49.925 clat (usec): min=1957, max=10936, avg=5745.60, stdev=450.88 00:33:49.925 lat (usec): min=1977, max=10938, avg=5747.62, stdev=450.78 00:33:49.925 clat percentiles (usec): 00:33:49.925 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:33:49.925 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5866], 00:33:49.925 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6259], 95.00th=[ 6456], 00:33:49.925 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 8848], 99.95th=[10159], 00:33:49.925 | 99.99th=[10421] 00:33:49.925 bw ( KiB/s): min=39041, max=40448, per=99.97%, avg=39776.25, stdev=575.57, samples=4 00:33:49.925 iops : min= 9760, max=10112, avg=9944.00, stdev=144.00, samples=4 00:33:49.925 lat (msec) : 2=0.01%, 4=0.14%, 10=99.76%, 20=0.10% 00:33:49.925 cpu : usr=76.56%, sys=22.29%, ctx=66, majf=0, minf=1507 00:33:49.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:49.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:49.925 issued rwts: total=19907,19953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:49.925 00:33:49.925 Run status group 0 (all jobs): 00:33:49.925 READ: bw=38.8MiB/s (40.6MB/s), 38.8MiB/s-38.8MiB/s (40.6MB/s-40.6MB/s), io=77.8MiB (81.5MB), run=2006-2006msec 00:33:49.925 WRITE: bw=38.9MiB/s (40.7MB/s), 38.9MiB/s-38.9MiB/s (40.7MB/s-40.7MB/s), io=77.9MiB (81.7MB), run=2006-2006msec 00:33:49.925 ----------------------------------------------------- 00:33:49.925 Suppressions used: 00:33:49.925 count bytes template 00:33:49.925 1 57 /usr/src/fio/parse.c 00:33:49.925 1 8 libtcmalloc_minimal.so 00:33:49.925 ----------------------------------------------------- 00:33:49.925 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:50.200 
15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:50.200 15:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:50.460 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:50.460 fio-3.35 00:33:50.460 Starting 1 thread 00:33:52.991 00:33:52.991 test: (groupid=0, jobs=1): err= 0: pid=4031434: Wed Nov 6 15:38:20 2024 00:33:52.991 read: IOPS=9440, BW=148MiB/s (155MB/s)(296MiB/2007msec) 00:33:52.991 slat (nsec): min=2639, max=96591, avg=3184.02, stdev=1464.52 00:33:52.991 clat (usec): min=2182, 
max=15653, avg=7802.14, stdev=1791.24 00:33:52.991 lat (usec): min=2185, max=15656, avg=7805.33, stdev=1791.29 00:33:52.991 clat percentiles (usec): 00:33:52.991 | 1.00th=[ 4047], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6194], 00:33:52.991 | 30.00th=[ 6783], 40.00th=[ 7373], 50.00th=[ 7832], 60.00th=[ 8225], 00:33:52.991 | 70.00th=[ 8717], 80.00th=[ 9372], 90.00th=[10159], 95.00th=[10814], 00:33:52.991 | 99.00th=[11994], 99.50th=[12780], 99.90th=[13829], 99.95th=[14091], 00:33:52.991 | 99.99th=[14484] 00:33:52.991 bw ( KiB/s): min=70944, max=83488, per=49.71%, avg=75088.00, stdev=5686.07, samples=4 00:33:52.991 iops : min= 4434, max= 5218, avg=4693.00, stdev=355.38, samples=4 00:33:52.991 write: IOPS=5372, BW=83.9MiB/s (88.0MB/s)(153MiB/1827msec); 0 zone resets 00:33:52.991 slat (usec): min=27, max=290, avg=32.36, stdev= 5.73 00:33:52.991 clat (usec): min=3017, max=18404, avg=10151.76, stdev=1829.01 00:33:52.991 lat (usec): min=3047, max=18434, avg=10184.13, stdev=1829.11 00:33:52.991 clat percentiles (usec): 00:33:52.991 | 1.00th=[ 6456], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 8586], 00:33:52.991 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10421], 00:33:52.991 | 70.00th=[10945], 80.00th=[11731], 90.00th=[12780], 95.00th=[13304], 00:33:52.991 | 99.00th=[14615], 99.50th=[15664], 99.90th=[17433], 99.95th=[17957], 00:33:52.991 | 99.99th=[18482] 00:33:52.991 bw ( KiB/s): min=71840, max=86656, per=90.62%, avg=77904.00, stdev=6245.19, samples=4 00:33:52.991 iops : min= 4490, max= 5416, avg=4869.00, stdev=390.32, samples=4 00:33:52.991 lat (msec) : 4=0.62%, 10=75.19%, 20=24.19% 00:33:52.991 cpu : usr=87.05%, sys=12.26%, ctx=40, majf=0, minf=2406 00:33:52.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:33:52.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:52.991 issued rwts: total=18948,9816,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:33:52.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:52.991 00:33:52.991 Run status group 0 (all jobs): 00:33:52.991 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=296MiB (310MB), run=2007-2007msec 00:33:52.991 WRITE: bw=83.9MiB/s (88.0MB/s), 83.9MiB/s-83.9MiB/s (88.0MB/s-88.0MB/s), io=153MiB (161MB), run=1827-1827msec 00:33:52.991 ----------------------------------------------------- 00:33:52.991 Suppressions used: 00:33:52.991 count bytes template 00:33:52.991 1 57 /usr/src/fio/parse.c 00:33:52.991 303 29088 /usr/src/fio/iolog.c 00:33:52.991 1 8 libtcmalloc_minimal.so 00:33:52.991 ----------------------------------------------------- 00:33:52.991 00:33:52.991 15:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:53.250 15:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:53.250 15:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:53.250 15:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:53.250 15:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:33:53.250 15:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:33:53.250 15:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:53.250 15:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:53.250 15:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:33:53.250 15:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 
00:33:53.250 15:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:33:53.250 15:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:33:56.539 Nvme0n1 00:33:56.539 15:38:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=eb9fcded-af83-4540-943c-dc768d075263 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb eb9fcded-af83-4540-943c-dc768d075263 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=eb9fcded-af83-4540-943c-dc768d075263 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:34:01.810 { 00:34:01.810 "uuid": "eb9fcded-af83-4540-943c-dc768d075263", 00:34:01.810 "name": "lvs_0", 00:34:01.810 "base_bdev": "Nvme0n1", 00:34:01.810 "total_data_clusters": 1489, 00:34:01.810 "free_clusters": 1489, 00:34:01.810 "block_size": 512, 00:34:01.810 "cluster_size": 1073741824 00:34:01.810 } 00:34:01.810 ]' 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | 
select(.uuid=="eb9fcded-af83-4540-943c-dc768d075263") .free_clusters' 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=1489 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="eb9fcded-af83-4540-943c-dc768d075263") .cluster_size' 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=1073741824 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=1524736 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 1524736 00:34:01.810 1524736 00:34:01.810 15:38:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1524736 00:34:01.810 41781355-d75b-4a94-83d2-95cf2550e67a 00:34:01.810 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:34:01.810 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:34:02.069 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:02.349 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:02.349 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:02.349 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:34:02.349 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:34:02.349 15:38:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:02.607 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:02.607 fio-3.35 00:34:02.607 Starting 1 thread 00:34:05.130 00:34:05.130 test: (groupid=0, jobs=1): err= 0: pid=4033532: Wed Nov 6 15:38:32 2024 00:34:05.130 read: IOPS=6479, BW=25.3MiB/s (26.5MB/s)(50.8MiB/2007msec) 00:34:05.130 slat (nsec): min=1723, max=117934, avg=1938.76, stdev=1344.39 00:34:05.130 clat (usec): min=368, max=270371, avg=10599.92, stdev=17421.06 00:34:05.130 lat (usec): min=370, max=270375, avg=10601.86, stdev=17421.13 00:34:05.130 clat percentiles (msec): 00:34:05.130 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:34:05.130 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:34:05.130 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 11], 00:34:05.130 | 99.00th=[ 12], 99.50th=[ 15], 99.90th=[ 271], 99.95th=[ 271], 00:34:05.130 | 99.99th=[ 271] 00:34:05.130 bw ( KiB/s): min=13384, max=30136, per=99.85%, avg=25878.00, stdev=8330.38, samples=4 00:34:05.130 iops : min= 3346, max= 7534, avg=6469.50, stdev=2082.59, samples=4 00:34:05.130 write: IOPS=6485, BW=25.3MiB/s (26.6MB/s)(50.8MiB/2007msec); 0 zone resets 00:34:05.130 slat (nsec): min=1777, max=99618, avg=2016.53, stdev=980.41 00:34:05.130 clat (usec): min=291, max=268834, avg=8988.03, stdev=18586.38 00:34:05.130 lat (usec): min=293, max=268840, avg=8990.04, stdev=18586.55 00:34:05.130 clat percentiles (msec): 00:34:05.130 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:34:05.130 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:34:05.130 | 70.00th=[ 8], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:34:05.130 | 99.00th=[ 10], 99.50th=[ 257], 99.90th=[ 
271], 99.95th=[ 271], 00:34:05.130 | 99.99th=[ 271] 00:34:05.130 bw ( KiB/s): min=14032, max=29952, per=99.87%, avg=25910.00, stdev=7919.08, samples=4 00:34:05.130 iops : min= 3508, max= 7488, avg=6477.50, stdev=1979.77, samples=4 00:34:05.130 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.02% 00:34:05.130 lat (msec) : 2=0.09%, 4=0.17%, 10=87.09%, 20=12.11%, 500=0.49% 00:34:05.130 cpu : usr=74.03%, sys=25.12%, ctx=108, majf=0, minf=1509 00:34:05.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:05.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:05.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:05.130 issued rwts: total=13004,13017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:05.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:05.130 00:34:05.130 Run status group 0 (all jobs): 00:34:05.130 READ: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=50.8MiB (53.3MB), run=2007-2007msec 00:34:05.130 WRITE: bw=25.3MiB/s (26.6MB/s), 25.3MiB/s-25.3MiB/s (26.6MB/s-26.6MB/s), io=50.8MiB (53.3MB), run=2007-2007msec 00:34:05.130 ----------------------------------------------------- 00:34:05.131 Suppressions used: 00:34:05.131 count bytes template 00:34:05.131 1 58 /usr/src/fio/parse.c 00:34:05.131 1 8 libtcmalloc_minimal.so 00:34:05.131 ----------------------------------------------------- 00:34:05.131 00:34:05.131 15:38:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:05.388 15:38:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:34:06.318 15:38:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=c106f57f-633a-4d5e-9e96-49bd0bad4814 00:34:06.318 15:38:33 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb c106f57f-633a-4d5e-9e96-49bd0bad4814 00:34:06.318 15:38:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=c106f57f-633a-4d5e-9e96-49bd0bad4814 00:34:06.318 15:38:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:34:06.319 15:38:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:34:06.319 15:38:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:34:06.319 15:38:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:06.576 15:38:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:34:06.576 { 00:34:06.576 "uuid": "eb9fcded-af83-4540-943c-dc768d075263", 00:34:06.576 "name": "lvs_0", 00:34:06.576 "base_bdev": "Nvme0n1", 00:34:06.576 "total_data_clusters": 1489, 00:34:06.576 "free_clusters": 0, 00:34:06.576 "block_size": 512, 00:34:06.576 "cluster_size": 1073741824 00:34:06.576 }, 00:34:06.576 { 00:34:06.576 "uuid": "c106f57f-633a-4d5e-9e96-49bd0bad4814", 00:34:06.576 "name": "lvs_n_0", 00:34:06.576 "base_bdev": "41781355-d75b-4a94-83d2-95cf2550e67a", 00:34:06.576 "total_data_clusters": 380811, 00:34:06.576 "free_clusters": 380811, 00:34:06.576 "block_size": 512, 00:34:06.576 "cluster_size": 4194304 00:34:06.576 } 00:34:06.576 ]' 00:34:06.576 15:38:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="c106f57f-633a-4d5e-9e96-49bd0bad4814") .free_clusters' 00:34:06.576 15:38:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=380811 00:34:06.576 15:38:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="c106f57f-633a-4d5e-9e96-49bd0bad4814") .cluster_size' 00:34:06.576 15:38:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=4194304 00:34:06.576 15:38:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=1523244 00:34:06.576 15:38:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 1523244 00:34:06.576 1523244 00:34:06.576 15:38:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1523244 00:34:08.472 c2daca48-9d9f-4a4a-a651-9472927e1351 00:34:08.472 15:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:34:08.472 15:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:34:08.729 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:09.011 15:38:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:09.274 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk, iodepth=128 00:34:09.274 fio-3.35 00:34:09.274 Starting 1 thread 00:34:11.801 00:34:11.801 test: (groupid=0, jobs=1): err= 0: pid=4034804: Wed Nov 6 15:38:39 2024 00:34:11.801 read: IOPS=6729, BW=26.3MiB/s (27.6MB/s)(52.8MiB/2008msec) 00:34:11.801 slat (nsec): min=1742, max=103891, avg=1981.00, stdev=1240.47 00:34:11.801 clat (usec): min=3624, max=17377, avg=10430.94, stdev=899.80 00:34:11.801 lat (usec): min=3644, max=17379, avg=10432.92, stdev=899.71 00:34:11.801 clat percentiles (usec): 00:34:11.801 | 1.00th=[ 8291], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:34:11.801 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:34:11.801 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:34:11.801 | 99.00th=[12387], 99.50th=[12518], 99.90th=[14877], 99.95th=[15926], 00:34:11.801 | 99.99th=[17433] 00:34:11.801 bw ( KiB/s): min=25792, max=27464, per=99.88%, avg=26884.00, stdev=754.68, samples=4 00:34:11.801 iops : min= 6448, max= 6866, avg=6721.00, stdev=188.67, samples=4 00:34:11.801 write: IOPS=6734, BW=26.3MiB/s (27.6MB/s)(52.8MiB/2008msec); 0 zone resets 00:34:11.801 slat (nsec): min=1787, max=315396, avg=2035.58, stdev=2744.07 00:34:11.801 clat (usec): min=1727, max=14664, avg=8452.39, stdev=749.17 00:34:11.801 lat (usec): min=1738, max=14666, avg=8454.43, stdev=749.16 00:34:11.801 clat percentiles (usec): 00:34:11.801 | 1.00th=[ 6718], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7898], 00:34:11.801 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:34:11.801 | 70.00th=[ 8848], 80.00th=[ 8979], 90.00th=[ 9372], 95.00th=[ 9634], 00:34:11.801 | 99.00th=[10159], 99.50th=[10290], 99.90th=[13566], 99.95th=[13829], 00:34:11.801 | 99.99th=[14615] 00:34:11.801 bw ( KiB/s): min=26712, max=27200, per=99.97%, avg=26930.00, stdev=202.43, samples=4 00:34:11.801 iops : min= 6678, max= 6800, avg=6732.50, stdev=50.61, samples=4 00:34:11.801 lat (msec) : 2=0.01%, 4=0.08%, 10=64.30%, 
20=35.62% 00:34:11.801 cpu : usr=73.89%, sys=25.11%, ctx=141, majf=0, minf=1507 00:34:11.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:11.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:11.801 issued rwts: total=13512,13523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:11.801 00:34:11.801 Run status group 0 (all jobs): 00:34:11.801 READ: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.8MiB (55.3MB), run=2008-2008msec 00:34:11.801 WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.8MiB (55.4MB), run=2008-2008msec 00:34:12.059 ----------------------------------------------------- 00:34:12.059 Suppressions used: 00:34:12.059 count bytes template 00:34:12.059 1 58 /usr/src/fio/parse.c 00:34:12.059 1 8 libtcmalloc_minimal.so 00:34:12.059 ----------------------------------------------------- 00:34:12.059 00:34:12.059 15:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:34:12.317 15:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:34:12.318 15:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:34:18.886 15:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:19.150 15:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:34:24.424 15:38:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:24.424 15:38:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:26.959 rmmod nvme_tcp 00:34:26.959 rmmod nvme_fabrics 00:34:26.959 rmmod nvme_keyring 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 4030193 ']' 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 4030193 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 4030193 ']' 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 4030193 00:34:26.959 15:38:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4030193 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4030193' 00:34:26.959 killing process with pid 4030193 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 4030193 00:34:26.959 15:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 4030193 00:34:27.897 15:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:27.897 15:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:27.897 15:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:27.897 15:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:34:27.897 15:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:27.897 15:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:34:27.897 15:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:27.897 15:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:27.897 15:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:27.897 15:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.897 
15:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.897 15:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.433 15:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:30.433 00:34:30.433 real 0m51.539s 00:34:30.433 user 3m24.425s 00:34:30.433 sys 0m11.337s 00:34:30.433 15:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:30.433 15:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.433 ************************************ 00:34:30.433 END TEST nvmf_fio_host 00:34:30.433 ************************************ 00:34:30.433 15:38:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:30.433 15:38:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:30.433 15:38:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:30.433 15:38:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.433 ************************************ 00:34:30.433 START TEST nvmf_failover 00:34:30.433 ************************************ 00:34:30.433 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:30.433 * Looking for test storage... 
00:34:30.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:30.433 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:30.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.434 --rc genhtml_branch_coverage=1 00:34:30.434 --rc genhtml_function_coverage=1 00:34:30.434 --rc genhtml_legend=1 00:34:30.434 --rc geninfo_all_blocks=1 00:34:30.434 --rc geninfo_unexecuted_blocks=1 00:34:30.434 00:34:30.434 ' 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:34:30.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.434 --rc genhtml_branch_coverage=1 00:34:30.434 --rc genhtml_function_coverage=1 00:34:30.434 --rc genhtml_legend=1 00:34:30.434 --rc geninfo_all_blocks=1 00:34:30.434 --rc geninfo_unexecuted_blocks=1 00:34:30.434 00:34:30.434 ' 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:30.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.434 --rc genhtml_branch_coverage=1 00:34:30.434 --rc genhtml_function_coverage=1 00:34:30.434 --rc genhtml_legend=1 00:34:30.434 --rc geninfo_all_blocks=1 00:34:30.434 --rc geninfo_unexecuted_blocks=1 00:34:30.434 00:34:30.434 ' 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:30.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.434 --rc genhtml_branch_coverage=1 00:34:30.434 --rc genhtml_function_coverage=1 00:34:30.434 --rc genhtml_legend=1 00:34:30.434 --rc geninfo_all_blocks=1 00:34:30.434 --rc geninfo_unexecuted_blocks=1 00:34:30.434 00:34:30.434 ' 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:30.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:30.434 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:34:30.435 15:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:37.006 15:39:03 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:37.006 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.006 15:39:03 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:37.006 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:37.006 15:39:03 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:37.006 Found net devices under 0000:86:00.0: cvl_0_0 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:37.006 Found net devices under 0000:86:00.1: cvl_0_1 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:37.006 15:39:03 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:37.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:37.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:34:37.006 00:34:37.006 --- 10.0.0.2 ping statistics --- 00:34:37.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.006 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:34:37.006 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:37.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:37.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:34:37.006 00:34:37.006 --- 10.0.0.1 ping statistics --- 00:34:37.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.006 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=4041204 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 4041204 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 4041204 ']' 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:37.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:37.007 15:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:37.007 [2024-11-06 15:39:03.821229] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:34:37.007 [2024-11-06 15:39:03.821316] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:37.007 [2024-11-06 15:39:03.951169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:37.007 [2024-11-06 15:39:04.056121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:37.007 [2024-11-06 15:39:04.056166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:37.007 [2024-11-06 15:39:04.056176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:37.007 [2024-11-06 15:39:04.056188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:37.007 [2024-11-06 15:39:04.056196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:37.007 [2024-11-06 15:39:04.058576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:37.007 [2024-11-06 15:39:04.058670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:37.007 [2024-11-06 15:39:04.058677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:37.007 15:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:37.007 15:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:34:37.007 15:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:37.007 15:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:37.266 15:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:37.266 15:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:37.266 15:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:37.266 [2024-11-06 15:39:04.841413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:37.266 15:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:37.524 Malloc0 00:34:37.524 15:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:37.783 15:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:38.042 15:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.301 [2024-11-06 15:39:05.702118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.301 15:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:38.301 [2024-11-06 15:39:05.910787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:38.562 15:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:38.562 [2024-11-06 15:39:06.123494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:38.562 15:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4041683 00:34:38.562 15:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:34:38.562 15:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:38.562 15:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4041683 /var/tmp/bdevperf.sock 00:34:38.562 15:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 
-- # '[' -z 4041683 ']' 00:34:38.562 15:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:38.562 15:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:38.563 15:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:38.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:38.563 15:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:38.563 15:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:39.574 15:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:39.574 15:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:34:39.574 15:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:39.833 NVMe0n1 00:34:39.833 15:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:40.091 00:34:40.091 15:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4041928 00:34:40.091 15:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:40.091 15:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:34:41.469 15:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:41.469 [2024-11-06 15:39:08.889456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:41.469 [2024-11-06 15:39:08.889515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:41.469 [2024-11-06 15:39:08.889526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:34:41.469 15:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:34:44.752 15:39:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:44.752 00:34:44.752 15:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:45.011 [2024-11-06 15:39:12.561766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:34:45.011 15:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:34:48.300 15:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:48.300 [2024-11-06 15:39:15.772483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:34:48.300 15:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:34:49.237 15:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:49.518 15:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 4041928 00:34:56.085 { 00:34:56.085 "results": [ 00:34:56.085 { 00:34:56.085 "job": "NVMe0n1", 00:34:56.085 "core_mask": "0x1", 00:34:56.085 "workload": "verify", 00:34:56.085 "status": "finished", 00:34:56.085 "verify_range": { 00:34:56.085 "start": 0, 00:34:56.085 "length": 16384 00:34:56.085 }, 00:34:56.085 "queue_depth": 128, 00:34:56.085 "io_size": 4096, 00:34:56.085 "runtime": 15.00766, 00:34:56.085 "iops": 9646.940295822267, 00:34:56.085 "mibps": 37.68336053055573, 00:34:56.085 "io_failed": 9549, 00:34:56.085 "io_timeout": 0, 00:34:56.085 "avg_latency_us": 12423.233630951225, 00:34:56.085 "min_latency_us": 475.9161904761905, 00:34:56.085 "max_latency_us": 19598.384761904763 00:34:56.085 } 00:34:56.085 ], 00:34:56.085 "core_count": 1 00:34:56.085 } 00:34:56.085 15:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 4041683 00:34:56.085 15:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 4041683 ']' 00:34:56.085 15:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 4041683 00:34:56.085 15:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:34:56.085 15:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:56.085 15:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4041683 00:34:56.085 15:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:56.085 15:39:22 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:56.085 15:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4041683' 00:34:56.085 killing process with pid 4041683 00:34:56.085 15:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 4041683 00:34:56.085 15:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 4041683 00:34:56.352 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:56.352 [2024-11-06 15:39:06.228447] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:34:56.352 [2024-11-06 15:39:06.228556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041683 ] 00:34:56.352 [2024-11-06 15:39:06.352834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.352 [2024-11-06 15:39:06.471280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.352 Running I/O for 15 seconds... 
00:34:56.352 9638.00 IOPS, 37.65 MiB/s [2024-11-06T14:39:23.990Z] [2024-11-06 15:39:08.892083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 
[2024-11-06 15:39:08.892272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892417] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.352 [2024-11-06 15:39:08.892636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.352 [2024-11-06 15:39:08.892646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85712 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 
15:39:08.892896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.892980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.892990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893010] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.353 [2024-11-06 15:39:08.893220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.353 [2024-11-06 15:39:08.893467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.353 [2024-11-06 15:39:08.893479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 
[2024-11-06 15:39:08.893488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893607] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.354 [2024-11-06 15:39:08.893891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.354 [2024-11-06 15:39:08.893939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86136 len:8 PRP1 0x0 PRP2 0x0 00:34:56.354 [2024-11-06 15:39:08.893950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.893965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.354 [2024-11-06 15:39:08.893974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.354 [2024-11-06 15:39:08.893983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86144 len:8 PRP1 0x0 PRP2 0x0 00:34:56.354 [2024-11-06 15:39:08.893993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.894004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.354 
[2024-11-06 15:39:08.894012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.354 [2024-11-06 15:39:08.894020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86152 len:8 PRP1 0x0 PRP2 0x0 00:34:56.354 [2024-11-06 15:39:08.894029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.894040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.354 [2024-11-06 15:39:08.894047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.354 [2024-11-06 15:39:08.894055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86160 len:8 PRP1 0x0 PRP2 0x0 00:34:56.354 [2024-11-06 15:39:08.894065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.894074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.354 [2024-11-06 15:39:08.894082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.354 [2024-11-06 15:39:08.894090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86168 len:8 PRP1 0x0 PRP2 0x0 00:34:56.354 [2024-11-06 15:39:08.894099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.894108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.354 [2024-11-06 15:39:08.894116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.354 [2024-11-06 15:39:08.894125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:86176 len:8 PRP1 0x0 PRP2 0x0 00:34:56.354 [2024-11-06 15:39:08.894135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.894143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.354 [2024-11-06 15:39:08.894151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.354 [2024-11-06 15:39:08.894159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86184 len:8 PRP1 0x0 PRP2 0x0 00:34:56.354 [2024-11-06 15:39:08.894169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.894179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.354 [2024-11-06 15:39:08.894186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.354 [2024-11-06 15:39:08.894193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86192 len:8 PRP1 0x0 PRP2 0x0 00:34:56.354 [2024-11-06 15:39:08.894210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.894220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.354 [2024-11-06 15:39:08.894229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.354 [2024-11-06 15:39:08.894238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86200 len:8 PRP1 0x0 PRP2 0x0 00:34:56.354 [2024-11-06 15:39:08.894247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.894255] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.354 [2024-11-06 15:39:08.894263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.354 [2024-11-06 15:39:08.894271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86208 len:8 PRP1 0x0 PRP2 0x0 00:34:56.354 [2024-11-06 15:39:08.894281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.894290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.354 [2024-11-06 15:39:08.894297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.354 [2024-11-06 15:39:08.894305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86216 len:8 PRP1 0x0 PRP2 0x0 00:34:56.354 [2024-11-06 15:39:08.894316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.354 [2024-11-06 15:39:08.894325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.354 [2024-11-06 15:39:08.894337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.354 [2024-11-06 15:39:08.894346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86224 len:8 PRP1 0x0 PRP2 0x0 00:34:56.354 [2024-11-06 15:39:08.894354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 
15:39:08.894378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86232 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86240 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86248 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86256 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86264 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86272 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86280 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894614] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86288 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86296 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86304 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86312 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 
[2024-11-06 15:39:08.894730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86320 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86328 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86336 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86344 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86352 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86360 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86368 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.894968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.894978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.894985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.894993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86376 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.895002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.895011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.895018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.895025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86384 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.895035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.895046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.895053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.895061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86392 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.895069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.895078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.895086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.895094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86400 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.895103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.895113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.895120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.355 [2024-11-06 15:39:08.895127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86408 len:8 PRP1 0x0 PRP2 0x0 00:34:56.355 [2024-11-06 15:39:08.895136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.355 [2024-11-06 15:39:08.895145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.355 [2024-11-06 15:39:08.895152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.356 [2024-11-06 15:39:08.895160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85432 len:8 PRP1 0x0 PRP2 0x0 00:34:56.356 [2024-11-06 15:39:08.895168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.895177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.356 [2024-11-06 15:39:08.895184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:34:56.356 [2024-11-06 15:39:08.895191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85440 len:8 PRP1 0x0 PRP2 0x0 00:34:56.356 [2024-11-06 15:39:08.895204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.895214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.356 [2024-11-06 15:39:08.895221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.356 [2024-11-06 15:39:08.895230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85448 len:8 PRP1 0x0 PRP2 0x0 00:34:56.356 [2024-11-06 15:39:08.895238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.895247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.356 [2024-11-06 15:39:08.895254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.356 [2024-11-06 15:39:08.895262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85456 len:8 PRP1 0x0 PRP2 0x0 00:34:56.356 [2024-11-06 15:39:08.895271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.895280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.356 [2024-11-06 15:39:08.895286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.356 [2024-11-06 15:39:08.895294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85464 len:8 PRP1 0x0 PRP2 0x0 00:34:56.356 [2024-11-06 15:39:08.895303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.895314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.356 [2024-11-06 15:39:08.895322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.356 [2024-11-06 15:39:08.895330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85472 len:8 PRP1 0x0 PRP2 0x0 00:34:56.356 [2024-11-06 15:39:08.895338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.895347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.356 [2024-11-06 15:39:08.895354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.356 [2024-11-06 15:39:08.895362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85480 len:8 PRP1 0x0 PRP2 0x0 00:34:56.356 [2024-11-06 15:39:08.895375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.895384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.356 [2024-11-06 15:39:08.895390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.356 [2024-11-06 15:39:08.905777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86416 len:8 PRP1 0x0 PRP2 0x0 00:34:56.356 [2024-11-06 15:39:08.905800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.905813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.356 
[2024-11-06 15:39:08.905823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.356 [2024-11-06 15:39:08.905835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86424 len:8 PRP1 0x0 PRP2 0x0 00:34:56.356 [2024-11-06 15:39:08.905847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.905860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.356 [2024-11-06 15:39:08.905871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.356 [2024-11-06 15:39:08.905881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86432 len:8 PRP1 0x0 PRP2 0x0 00:34:56.356 [2024-11-06 15:39:08.905893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.905906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.356 [2024-11-06 15:39:08.905915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.356 [2024-11-06 15:39:08.905926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86440 len:8 PRP1 0x0 PRP2 0x0 00:34:56.356 [2024-11-06 15:39:08.905939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.906381] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:56.356 [2024-11-06 15:39:08.906428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.356 [2024-11-06 
15:39:08.906445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.906460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.356 [2024-11-06 15:39:08.906474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.906488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.356 [2024-11-06 15:39:08.906502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.906517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.356 [2024-11-06 15:39:08.906533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:08.906547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:34:56.356 [2024-11-06 15:39:08.906611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d780 (9): Bad file descriptor 00:34:56.356 [2024-11-06 15:39:08.910691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:56.356 [2024-11-06 15:39:08.934264] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:34:56.356 9583.50 IOPS, 37.44 MiB/s [2024-11-06T14:39:23.994Z] 9656.00 IOPS, 37.72 MiB/s [2024-11-06T14:39:23.994Z] 9697.00 IOPS, 37.88 MiB/s [2024-11-06T14:39:23.994Z] [2024-11-06 15:39:12.562353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.356 [2024-11-06 15:39:12.562400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:12.562423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.356 [2024-11-06 15:39:12.562434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:12.562446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.356 [2024-11-06 15:39:12.562456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:12.562467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.356 [2024-11-06 15:39:12.562477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:12.562489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.356 [2024-11-06 15:39:12.562499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:12.562511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.356 [2024-11-06 15:39:12.562520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:12.562534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.356 [2024-11-06 15:39:12.562545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.356 [2024-11-06 15:39:12.562556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:56.357 [2024-11-06 15:39:12.562778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.357 [2024-11-06 15:39:12.562903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.562924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.562948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.562968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.562979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.562988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 
[2024-11-06 15:39:12.563135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.357 [2024-11-06 15:39:12.563406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.357 [2024-11-06 15:39:12.563416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.358 [2024-11-06 15:39:12.563499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 
[2024-11-06 15:39:12.563851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.563986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.563997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.564008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.564018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.564029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.564038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.564049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.564059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.564070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.564080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.564090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.564104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.564116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.564125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.564137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.564147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.564158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.564167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.564179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.564188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.564199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.358 [2024-11-06 15:39:12.564214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.564225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.564235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.358 [2024-11-06 15:39:12.564246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.358 [2024-11-06 15:39:12.564255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.359 [2024-11-06 15:39:12.564572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.359 [2024-11-06 15:39:12.564593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.564638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118400 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.564648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.564670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.564680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118408 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.564689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.564706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.564714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118416 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 
[2024-11-06 15:39:12.564724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.564740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.564748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118424 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.564757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.564774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.564782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118432 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.564792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.564810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.564818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118440 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.564828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.564846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.564854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118448 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.564863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.564880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.564888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118456 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.564897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.564913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.564921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118464 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.564929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.564946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.564954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118472 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.564963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.564971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.564978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.564986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118480 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.564995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.565004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.565011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.565019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118488 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.565028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.565036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.565043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.565051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118496 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.565060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.565070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.359 [2024-11-06 15:39:12.565078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.359 [2024-11-06 15:39:12.565086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118504 len:8 PRP1 0x0 PRP2 0x0 00:34:56.359 [2024-11-06 15:39:12.565096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.359 [2024-11-06 15:39:12.565105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.360 [2024-11-06 15:39:12.565112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.360 [2024-11-06 15:39:12.565121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118512 len:8 PRP1 0x0 PRP2 0x0 00:34:56.360 [2024-11-06 15:39:12.565130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.360 [2024-11-06 15:39:12.565145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.360 [2024-11-06 15:39:12.565153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118520 len:8 PRP1 0x0 PRP2 0x0 00:34:56.360 [2024-11-06 15:39:12.565162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.360 [2024-11-06 15:39:12.565178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:34:56.360 [2024-11-06 15:39:12.565186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118528 len:8 PRP1 0x0 PRP2 0x0 00:34:56.360 [2024-11-06 15:39:12.565195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.360 [2024-11-06 15:39:12.565219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.360 [2024-11-06 15:39:12.565227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118536 len:8 PRP1 0x0 PRP2 0x0 00:34:56.360 [2024-11-06 15:39:12.565236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.360 [2024-11-06 15:39:12.565252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.360 [2024-11-06 15:39:12.565260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118544 len:8 PRP1 0x0 PRP2 0x0 00:34:56.360 [2024-11-06 15:39:12.565269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.360 [2024-11-06 15:39:12.565285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.360 [2024-11-06 15:39:12.565293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118552 len:8 PRP1 0x0 PRP2 0x0 00:34:56.360 [2024-11-06 15:39:12.565302] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.360 [2024-11-06 15:39:12.565318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.360 [2024-11-06 15:39:12.565351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118560 len:8 PRP1 0x0 PRP2 0x0 00:34:56.360 [2024-11-06 15:39:12.565360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.360 [2024-11-06 15:39:12.565380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.360 [2024-11-06 15:39:12.565388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118568 len:8 PRP1 0x0 PRP2 0x0 00:34:56.360 [2024-11-06 15:39:12.565397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.360 [2024-11-06 15:39:12.565413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.360 [2024-11-06 15:39:12.565421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118576 len:8 PRP1 0x0 PRP2 0x0 00:34:56.360 [2024-11-06 15:39:12.565429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:34:56.360 [2024-11-06 15:39:12.565446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.360 [2024-11-06 15:39:12.565454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117744 len:8 PRP1 0x0 PRP2 0x0 00:34:56.360 [2024-11-06 15:39:12.565463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.360 [2024-11-06 15:39:12.565478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.360 [2024-11-06 15:39:12.565485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117752 len:8 PRP1 0x0 PRP2 0x0 00:34:56.360 [2024-11-06 15:39:12.565494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565820] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:56.360 [2024-11-06 15:39:12.565853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.360 [2024-11-06 15:39:12.565865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.360 [2024-11-06 15:39:12.565885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565896] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.360 [2024-11-06 15:39:12.565905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.360 [2024-11-06 15:39:12.565923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:12.565933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:34:56.360 [2024-11-06 15:39:12.565976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d780 (9): Bad file descriptor 00:34:56.360 [2024-11-06 15:39:12.568973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:34:56.360 [2024-11-06 15:39:12.728321] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:34:56.360 9401.20 IOPS, 36.72 MiB/s [2024-11-06T14:39:23.998Z] 9491.67 IOPS, 37.08 MiB/s [2024-11-06T14:39:23.998Z] 9538.43 IOPS, 37.26 MiB/s [2024-11-06T14:39:23.998Z] 9586.12 IOPS, 37.45 MiB/s [2024-11-06T14:39:23.998Z] 9619.33 IOPS, 37.58 MiB/s [2024-11-06T14:39:23.998Z] [2024-11-06 15:39:16.988624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.360 [2024-11-06 15:39:16.988684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.360 [2024-11-06 15:39:16.988716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.360 [2024-11-06 15:39:16.988739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.360 [2024-11-06 15:39:16.988762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.360 [2024-11-06 15:39:16.988782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.360 [2024-11-06 15:39:16.988812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.360 [2024-11-06 15:39:16.988833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.360 [2024-11-06 15:39:16.988855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.360 [2024-11-06 15:39:16.988877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.360 [2024-11-06 15:39:16.988899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.360 [2024-11-06 
15:39:16.988923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.360 [2024-11-06 15:39:16.988944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.360 [2024-11-06 15:39:16.988971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.360 [2024-11-06 15:39:16.988984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.360 [2024-11-06 15:39:16.988993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:12 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:56.361 [2024-11-06 15:39:16.989179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 
[2024-11-06 15:39:16.989544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.361 [2024-11-06 15:39:16.989709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.361 [2024-11-06 15:39:16.989719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.989730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.989750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.989764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.989773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.989786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.989795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.989806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.989816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.989827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.989836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.989847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.989856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.989867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.989877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.989888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.989897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 
[2024-11-06 15:39:16.989908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.989917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.989928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.989938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.989950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.989959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.989969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.989979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.989990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 
[2024-11-06 15:39:16.990268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.362 [2024-11-06 15:39:16.990560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.362 [2024-11-06 15:39:16.990569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 
[2024-11-06 15:39:16.990622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.990953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.363 [2024-11-06 15:39:16.990961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 
[2024-11-06 15:39:16.990996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.363 [2024-11-06 15:39:16.991010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118304 len:8 PRP1 0x0 PRP2 0x0 00:34:56.363 [2024-11-06 15:39:16.991021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.991035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.363 [2024-11-06 15:39:16.991044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.363 [2024-11-06 15:39:16.991053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118312 len:8 PRP1 0x0 PRP2 0x0 00:34:56.363 [2024-11-06 15:39:16.991062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.991073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.363 [2024-11-06 15:39:16.991081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.363 [2024-11-06 15:39:16.991089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118320 len:8 PRP1 0x0 PRP2 0x0 00:34:56.363 [2024-11-06 15:39:16.991098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.991113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.363 [2024-11-06 15:39:16.991122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.363 [2024-11-06 15:39:16.991130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:118328 len:8 PRP1 0x0 PRP2 0x0 00:34:56.363 [2024-11-06 15:39:16.991139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.991150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.363 [2024-11-06 15:39:16.991157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.363 [2024-11-06 15:39:16.991165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118336 len:8 PRP1 0x0 PRP2 0x0 00:34:56.363 [2024-11-06 15:39:16.991173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.991183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.363 [2024-11-06 15:39:16.991190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.363 [2024-11-06 15:39:16.991198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118344 len:8 PRP1 0x0 PRP2 0x0 00:34:56.363 [2024-11-06 15:39:16.991213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.991221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.363 [2024-11-06 15:39:16.991229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.363 [2024-11-06 15:39:16.991236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118352 len:8 PRP1 0x0 PRP2 0x0 00:34:56.363 [2024-11-06 15:39:16.991245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.991254] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.363 [2024-11-06 15:39:16.991262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.363 [2024-11-06 15:39:16.991269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118360 len:8 PRP1 0x0 PRP2 0x0 00:34:56.363 [2024-11-06 15:39:16.991278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.991287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.363 [2024-11-06 15:39:16.991293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.363 [2024-11-06 15:39:16.991302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118368 len:8 PRP1 0x0 PRP2 0x0 00:34:56.363 [2024-11-06 15:39:16.991311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.991320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.363 [2024-11-06 15:39:16.991326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.363 [2024-11-06 15:39:16.991335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118376 len:8 PRP1 0x0 PRP2 0x0 00:34:56.363 [2024-11-06 15:39:16.991344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.991353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.363 [2024-11-06 15:39:16.991360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.363 [2024-11-06 
15:39:16.991368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118384 len:8 PRP1 0x0 PRP2 0x0 00:34:56.363 [2024-11-06 15:39:16.991377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.991385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.363 [2024-11-06 15:39:16.991393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.363 [2024-11-06 15:39:16.991401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118392 len:8 PRP1 0x0 PRP2 0x0 00:34:56.363 [2024-11-06 15:39:16.991412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.363 [2024-11-06 15:39:16.991422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.364 [2024-11-06 15:39:16.991428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.364 [2024-11-06 15:39:16.991436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118400 len:8 PRP1 0x0 PRP2 0x0 00:34:56.364 [2024-11-06 15:39:16.991444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.991453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.364 [2024-11-06 15:39:16.991461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.364 [2024-11-06 15:39:16.991469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118408 len:8 PRP1 0x0 PRP2 0x0 00:34:56.364 [2024-11-06 15:39:16.991478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.991487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.364 [2024-11-06 15:39:16.991494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.364 [2024-11-06 15:39:16.991501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118416 len:8 PRP1 0x0 PRP2 0x0 00:34:56.364 [2024-11-06 15:39:16.991510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.991519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.364 [2024-11-06 15:39:16.991526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.364 [2024-11-06 15:39:16.991533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118424 len:8 PRP1 0x0 PRP2 0x0 00:34:56.364 [2024-11-06 15:39:16.991542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.991550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.364 [2024-11-06 15:39:16.991557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.364 [2024-11-06 15:39:16.991565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118432 len:8 PRP1 0x0 PRP2 0x0 00:34:56.364 [2024-11-06 15:39:16.991574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.991583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.364 [2024-11-06 15:39:16.991591] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.364 [2024-11-06 15:39:16.991599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118440 len:8 PRP1 0x0 PRP2 0x0 00:34:56.364 [2024-11-06 15:39:16.991608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.991616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.364 [2024-11-06 15:39:16.991623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.364 [2024-11-06 15:39:16.991631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118448 len:8 PRP1 0x0 PRP2 0x0 00:34:56.364 [2024-11-06 15:39:16.991640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.991649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.364 [2024-11-06 15:39:16.991657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.364 [2024-11-06 15:39:16.991666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118456 len:8 PRP1 0x0 PRP2 0x0 00:34:56.364 [2024-11-06 15:39:16.991675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.991685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:56.364 [2024-11-06 15:39:16.991692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:56.364 [2024-11-06 15:39:16.991700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118464 len:8 PRP1 0x0 PRP2 0x0 
00:34:56.364 [2024-11-06 15:39:16.991708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.992041] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:34:56.364 [2024-11-06 15:39:16.992073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.364 [2024-11-06 15:39:16.992084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.992095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.364 [2024-11-06 15:39:16.992105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.992115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.364 [2024-11-06 15:39:16.992124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.992134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.364 [2024-11-06 15:39:16.992143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.364 [2024-11-06 15:39:16.992153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:34:56.364 [2024-11-06 15:39:16.992187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d780 (9): Bad file descriptor 00:34:56.364 [2024-11-06 15:39:16.995181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:34:56.364 [2024-11-06 15:39:17.021668] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:34:56.364 9613.60 IOPS, 37.55 MiB/s [2024-11-06T14:39:24.002Z] 9620.55 IOPS, 37.58 MiB/s [2024-11-06T14:39:24.002Z] 9633.00 IOPS, 37.63 MiB/s [2024-11-06T14:39:24.002Z] 9632.69 IOPS, 37.63 MiB/s [2024-11-06T14:39:24.002Z] 9642.43 IOPS, 37.67 MiB/s [2024-11-06T14:39:24.002Z] 9643.33 IOPS, 37.67 MiB/s 00:34:56.364 Latency(us) 00:34:56.364 [2024-11-06T14:39:24.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.364 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:56.364 Verification LBA range: start 0x0 length 0x4000 00:34:56.364 NVMe0n1 : 15.01 9646.94 37.68 636.28 0.00 12423.23 475.92 19598.38 00:34:56.364 [2024-11-06T14:39:24.002Z] =================================================================================================================== 00:34:56.364 [2024-11-06T14:39:24.002Z] Total : 9646.94 37.68 636.28 0.00 12423.23 475.92 19598.38 00:34:56.364 Received shutdown signal, test time was about 15.000000 seconds 00:34:56.364 00:34:56.364 Latency(us) 00:34:56.364 [2024-11-06T14:39:24.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.364 [2024-11-06T14:39:24.002Z] =================================================================================================================== 00:34:56.364 [2024-11-06T14:39:24.002Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:56.364 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:34:56.364 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:34:56.364 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:34:56.364 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4044839 00:34:56.364 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:34:56.364 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4044839 /var/tmp/bdevperf.sock 00:34:56.364 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 4044839 ']' 00:34:56.364 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:56.364 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:56.364 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:56.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:56.364 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:56.364 15:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:57.302 15:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:57.302 15:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:34:57.302 15:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:57.302 [2024-11-06 15:39:24.881115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:57.302 15:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:57.561 [2024-11-06 15:39:25.065695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:57.561 15:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:57.819 NVMe0n1 00:34:57.820 15:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:58.078 00:34:58.078 15:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:58.337 00:34:58.337 15:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:58.337 15:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:58.596 15:39:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:58.855 15:39:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:35:02.142 15:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:02.142 15:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:35:02.142 15:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4045760 00:35:02.142 15:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:02.142 15:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 4045760 00:35:03.078 { 00:35:03.078 "results": [ 00:35:03.078 { 00:35:03.078 "job": "NVMe0n1", 00:35:03.078 "core_mask": "0x1", 00:35:03.078 "workload": "verify", 00:35:03.078 "status": "finished", 00:35:03.078 "verify_range": { 00:35:03.078 "start": 0, 00:35:03.078 "length": 16384 00:35:03.078 }, 00:35:03.078 "queue_depth": 128, 00:35:03.078 "io_size": 4096, 00:35:03.078 "runtime": 1.010417, 00:35:03.078 "iops": 9567.337049950664, 00:35:03.078 "mibps": 37.37241035136978, 00:35:03.078 "io_failed": 0, 00:35:03.078 "io_timeout": 0, 00:35:03.078 "avg_latency_us": 
13328.61849295837, 00:35:03.078 "min_latency_us": 2871.1009523809525, 00:35:03.078 "max_latency_us": 15541.394285714287 00:35:03.078 } 00:35:03.078 ], 00:35:03.078 "core_count": 1 00:35:03.078 } 00:35:03.078 15:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:03.078 [2024-11-06 15:39:23.875787] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:35:03.078 [2024-11-06 15:39:23.875885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044839 ] 00:35:03.078 [2024-11-06 15:39:24.000347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.078 [2024-11-06 15:39:24.106171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.078 [2024-11-06 15:39:26.275395] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:35:03.078 [2024-11-06 15:39:26.275466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.078 [2024-11-06 15:39:26.275486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.078 [2024-11-06 15:39:26.275501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.078 [2024-11-06 15:39:26.275512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.078 [2024-11-06 15:39:26.275523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:35:03.078 [2024-11-06 15:39:26.275533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.078 [2024-11-06 15:39:26.275543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.078 [2024-11-06 15:39:26.275560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.078 [2024-11-06 15:39:26.275570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:35:03.078 [2024-11-06 15:39:26.275617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:35:03.078 [2024-11-06 15:39:26.275646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032d780 (9): Bad file descriptor 00:35:03.078 [2024-11-06 15:39:26.279893] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:35:03.078 Running I/O for 1 seconds... 
00:35:03.078 9539.00 IOPS, 37.26 MiB/s 00:35:03.078 Latency(us) 00:35:03.078 [2024-11-06T14:39:30.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.078 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:03.078 Verification LBA range: start 0x0 length 0x4000 00:35:03.078 NVMe0n1 : 1.01 9567.34 37.37 0.00 0.00 13328.62 2871.10 15541.39 00:35:03.078 [2024-11-06T14:39:30.716Z] =================================================================================================================== 00:35:03.078 [2024-11-06T14:39:30.716Z] Total : 9567.34 37.37 0.00 0.00 13328.62 2871.10 15541.39 00:35:03.078 15:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:03.078 15:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:35:03.337 15:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:03.596 15:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:03.596 15:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:35:03.596 15:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:03.854 15:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 4044839 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 4044839 ']' 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 4044839 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4044839 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4044839' 00:35:07.141 killing process with pid 4044839 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 4044839 00:35:07.141 15:39:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 4044839 00:35:08.079 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:35:08.079 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:08.337 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:35:08.337 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:08.337 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:35:08.337 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:08.337 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:35:08.337 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:08.337 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:35:08.337 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:08.337 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:08.337 rmmod nvme_tcp 00:35:08.337 rmmod nvme_fabrics 00:35:08.337 rmmod nvme_keyring 00:35:08.337 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:08.337 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 4041204 ']' 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 4041204 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 4041204 ']' 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 4041204 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4041204 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4041204' 00:35:08.338 killing process with pid 4041204 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 4041204 00:35:08.338 15:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 4041204 00:35:09.715 15:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:09.715 15:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:09.715 15:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:09.715 15:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:35:09.715 15:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:35:09.715 15:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:09.715 15:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:35:09.715 15:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:09.715 15:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:09.715 15:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.715 15:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:09.715 15:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.620 15:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:11.880 00:35:11.880 real 0m41.636s 00:35:11.880 user 2m12.846s 00:35:11.880 sys 
0m8.226s 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:11.880 ************************************ 00:35:11.880 END TEST nvmf_failover 00:35:11.880 ************************************ 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.880 ************************************ 00:35:11.880 START TEST nvmf_host_discovery 00:35:11.880 ************************************ 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:11.880 * Looking for test storage... 
00:35:11.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:11.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.880 --rc genhtml_branch_coverage=1 00:35:11.880 --rc genhtml_function_coverage=1 00:35:11.880 --rc 
genhtml_legend=1 00:35:11.880 --rc geninfo_all_blocks=1 00:35:11.880 --rc geninfo_unexecuted_blocks=1 00:35:11.880 00:35:11.880 ' 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:11.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.880 --rc genhtml_branch_coverage=1 00:35:11.880 --rc genhtml_function_coverage=1 00:35:11.880 --rc genhtml_legend=1 00:35:11.880 --rc geninfo_all_blocks=1 00:35:11.880 --rc geninfo_unexecuted_blocks=1 00:35:11.880 00:35:11.880 ' 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:11.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.880 --rc genhtml_branch_coverage=1 00:35:11.880 --rc genhtml_function_coverage=1 00:35:11.880 --rc genhtml_legend=1 00:35:11.880 --rc geninfo_all_blocks=1 00:35:11.880 --rc geninfo_unexecuted_blocks=1 00:35:11.880 00:35:11.880 ' 00:35:11.880 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:11.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.880 --rc genhtml_branch_coverage=1 00:35:11.880 --rc genhtml_function_coverage=1 00:35:11.880 --rc genhtml_legend=1 00:35:11.880 --rc geninfo_all_blocks=1 00:35:11.880 --rc geninfo_unexecuted_blocks=1 00:35:11.880 00:35:11.880 ' 00:35:11.881 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:11.881 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:35:11.881 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:11.881 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:11.881 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:11.881 15:39:39 
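The trace above shows `scripts/common.sh` deciding whether the installed lcov is older than 2 (`lt 1.15 2`): each version string is split on `.`, `-`, and `:` into an array, then compared field by field. A minimal standalone sketch of that comparison — numeric fields only, unlike the real helper, which also validates each field through its `decimal` function:

```shell
# Field-wise "less than" version compare, as traced in scripts/common.sh.
# Split on '.', '-', ':'; compare field by field; missing fields count as 0.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v a b len=${#ver1[@]}
  (( ${#ver2[@]} > len )) && len=${#ver2[@]}
  for (( v = 0; v < len; v++ )); do
    a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a < b )) && return 0   # earlier field decides: strictly less
    (( a > b )) && return 1   # strictly greater
  done
  return 1                    # equal => not less-than
}

lt 1.15 2 && echo "1.15 < 2"  # matches the trace's outcome (return 0)
```

This is why the run above takes the `lt ... return 0` branch and enables the extra `--rc lcov_branch_coverage=1` options.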
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:11.881 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:11.881 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:11.881 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:11.881 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:11.881 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:11.881 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:12.144 15:39:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:12.144 15:39:39 
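The PATH echoed above contains the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` directories many times over, because `paths/export.sh` prepends them on every `source`. A small dedup helper of the kind that would collapse such a PATH (a sketch, not part of the SPDK scripts):

```shell
# Collapse duplicate entries in a ':'-separated path list, keeping first-seen order.
dedup_path() {
  local out= dir seen=:
  local IFS=:
  for dir in $1; do
    case "$seen" in *":$dir:"*) continue ;; esac  # already emitted
    seen+="$dir:"
    out+="${out:+:}$dir"
  done
  printf '%s\n' "$out"
}

dedup_path "/a/bin:/b/bin:/a/bin:/c/bin"  # -> /a/bin:/b/bin:/c/bin
```

Duplicates are harmless to lookup semantics (the first match wins either way), which is why the run proceeds despite the bloated PATH.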
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:12.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:35:12.144 15:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:35:18.716 
15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:18.716 15:39:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:18.716 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:18.716 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:18.716 Found net devices under 0000:86:00.0: cvl_0_0 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:18.716 Found net devices under 0000:86:00.1: cvl_0_1 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:18.716 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:18.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:18.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:35:18.717 00:35:18.717 --- 10.0.0.2 ping statistics --- 00:35:18.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.717 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:18.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:18.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:35:18.717 00:35:18.717 --- 10.0.0.1 ping statistics --- 00:35:18.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.717 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:18.717 
15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=4050444 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 4050444 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 4050444 ']' 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:18.717 15:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.717 [2024-11-06 15:39:45.526325] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:35:18.717 [2024-11-06 15:39:45.526413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:18.717 [2024-11-06 15:39:45.654074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.717 [2024-11-06 15:39:45.757975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:18.717 [2024-11-06 15:39:45.758020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:18.717 [2024-11-06 15:39:45.758030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:18.717 [2024-11-06 15:39:45.758041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:18.717 [2024-11-06 15:39:45.758050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:18.717 [2024-11-06 15:39:45.759523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.717 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:18.717 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:35:18.717 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:18.717 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:18.717 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.977 [2024-11-06 15:39:46.363511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.977 [2024-11-06 15:39:46.375642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:18.977 15:39:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.977 null0 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.977 null1 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4050566 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4050566 /tmp/host.sock 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@833 -- # '[' -z 4050566 ']' 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:18.977 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:18.977 15:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:18.977 [2024-11-06 15:39:46.481617] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:35:18.977 [2024-11-06 15:39:46.481705] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4050566 ] 00:35:18.977 [2024-11-06 15:39:46.604027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.236 [2024-11-06 15:39:46.712141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:35:19.805 
15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:35:19.805 15:39:47 
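On the host side, the trace shows a second SPDK app started with `-r /tmp/host.sock`, verbose `bdev_nvme` logging enabled, and discovery kicked off against the target. A dry-run sketch of those two RPCs (again reconstructed from the xtrace; replace `RPC` with a real `scripts/rpc.py -s /tmp/host.sock` invocation to run it against a live app):

```shell
# Dry-run sketch of the host-side discovery start shown in the trace above.
RPC=${RPC:-echo rpc.py -s /tmp/host.sock}

$RPC log_set_flag bdev_nvme                     # enable verbose bdev_nvme logging
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test         # attach to the target's discovery service
```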
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.805 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:35:20.065 
15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.065 [2024-11-06 15:39:47.627119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:20.065 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:35:20.324 15:39:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:35:20.892 [2024-11-06 15:39:48.364851] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:20.892 [2024-11-06 15:39:48.364882] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:20.892 [2024-11-06 15:39:48.364908] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:20.892 [2024-11-06 15:39:48.493320] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:21.151 [2024-11-06 15:39:48.595364] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:21.151 [2024-11-06 15:39:48.596660] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x61500032e680:1 started. 00:35:21.151 [2024-11-06 15:39:48.598406] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:21.151 [2024-11-06 15:39:48.598429] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:21.151 [2024-11-06 15:39:48.604020] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500032e680 was disconnected and freed. delete nvme_qpair. 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:21.411 15:39:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.411 15:39:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:21.411 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.411 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:21.411 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:35:21.411 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:35:21.411 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:21.411 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:35:21.411 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.411 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.671 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:21.671 [2024-11-06 15:39:49.298405] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500032eb80:1 started. 00:35:21.671 [2024-11-06 15:39:49.305682] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500032eb80 was disconnected and freed. delete nvme_qpair. 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:21.931 15:39:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.931 [2024-11-06 15:39:49.381068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:21.931 [2024-11-06 
15:39:49.381611] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:21.931 [2024-11-06 15:39:49.381642] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 
0 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.931 [2024-11-06 15:39:49.508341] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ 
\4\4\2\1 ]] 00:35:21.931 15:39:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:35:22.190 [2024-11-06 15:39:49.612301] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:35:22.190 [2024-11-06 15:39:49.612355] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:22.190 [2024-11-06 15:39:49.612369] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:22.190 [2024-11-06 15:39:49.612377] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.129 [2024-11-06 15:39:50.637248] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:23.129 [2024-11-06 15:39:50.637288] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 
max=10 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:23.129 [2024-11-06 15:39:50.643569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.129 [2024-11-06 15:39:50.643603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.129 [2024-11-06 15:39:50.643619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.129 [2024-11-06 15:39:50.643629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.129 [2024-11-06 15:39:50.643640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.129 [2024-11-06 15:39:50.643650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.129 [2024-11-06 15:39:50.643661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.129 [2024-11-06 15:39:50.643671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.129 [2024-11-06 15:39:50.643680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032df00 is same with the state(6) to be set 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:35:23.129 15:39:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:23.129 [2024-11-06 15:39:50.653569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032df00 (9): Bad file descriptor 00:35:23.129 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.129 [2024-11-06 15:39:50.663610] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:23.129 [2024-11-06 15:39:50.663638] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:23.129 [2024-11-06 15:39:50.663651] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:23.129 [2024-11-06 15:39:50.663664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:23.129 [2024-11-06 15:39:50.663704] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:23.129 [2024-11-06 15:39:50.663901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.129 [2024-11-06 15:39:50.663924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032df00 with addr=10.0.0.2, port=4420 00:35:23.129 [2024-11-06 15:39:50.663937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032df00 is same with the state(6) to be set 00:35:23.129 [2024-11-06 15:39:50.663955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032df00 (9): Bad file descriptor 00:35:23.130 [2024-11-06 15:39:50.663970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:23.130 [2024-11-06 15:39:50.663980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:23.130 [2024-11-06 15:39:50.663995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:23.130 [2024-11-06 15:39:50.664006] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:23.130 [2024-11-06 15:39:50.664014] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:23.130 [2024-11-06 15:39:50.664021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:23.130 [2024-11-06 15:39:50.673738] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:23.130 [2024-11-06 15:39:50.673761] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:35:23.130 [2024-11-06 15:39:50.673768] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:23.130 [2024-11-06 15:39:50.673775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:23.130 [2024-11-06 15:39:50.673798] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:23.130 [2024-11-06 15:39:50.673995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.130 [2024-11-06 15:39:50.674015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032df00 with addr=10.0.0.2, port=4420 00:35:23.130 [2024-11-06 15:39:50.674027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032df00 is same with the state(6) to be set 00:35:23.130 [2024-11-06 15:39:50.674042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032df00 (9): Bad file descriptor 00:35:23.130 [2024-11-06 15:39:50.674056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:23.130 [2024-11-06 15:39:50.674065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:23.130 [2024-11-06 15:39:50.674075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:23.130 [2024-11-06 15:39:50.674084] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:23.130 [2024-11-06 15:39:50.674091] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:35:23.130 [2024-11-06 15:39:50.674097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:23.130 [2024-11-06 15:39:50.683834] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:23.130 [2024-11-06 15:39:50.683864] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:23.130 [2024-11-06 15:39:50.683872] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:23.130 [2024-11-06 15:39:50.683878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:23.130 [2024-11-06 15:39:50.683901] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:23.130 [2024-11-06 15:39:50.684003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.130 [2024-11-06 15:39:50.684021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032df00 with addr=10.0.0.2, port=4420 00:35:23.130 [2024-11-06 15:39:50.684032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032df00 is same with the state(6) to be set 00:35:23.130 [2024-11-06 15:39:50.684048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032df00 (9): Bad file descriptor 00:35:23.130 [2024-11-06 15:39:50.684062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:23.130 [2024-11-06 15:39:50.684071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:23.130 [2024-11-06 15:39:50.684080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed 
state. 00:35:23.130 [2024-11-06 15:39:50.684088] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:23.130 [2024-11-06 15:39:50.684095] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:23.130 [2024-11-06 15:39:50.684101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:23.130 [2024-11-06 15:39:50.693936] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:23.130 [2024-11-06 15:39:50.693961] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:23.130 [2024-11-06 15:39:50.693968] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:35:23.130 [2024-11-06 15:39:50.693974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:23.130 [2024-11-06 15:39:50.694002] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:23.130 [2024-11-06 15:39:50.694177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.130 [2024-11-06 15:39:50.694197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032df00 with addr=10.0.0.2, port=4420 00:35:23.130 [2024-11-06 15:39:50.694214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032df00 is same with the state(6) to be set 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:23.130 [2024-11-06 15:39:50.694229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032df00 (9): Bad file descriptor 00:35:23.130 [2024-11-06 15:39:50.694248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:23.130 [2024-11-06 15:39:50.694257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:23.130 [2024-11-06 15:39:50.694266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:23.130 [2024-11-06 15:39:50.694274] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:23.130 [2024-11-06 15:39:50.694281] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:35:23.130 [2024-11-06 15:39:50.694287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.130 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:23.130 [2024-11-06 15:39:50.704038] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:23.130 [2024-11-06 15:39:50.704065] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:23.130 [2024-11-06 15:39:50.704073] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:23.130 [2024-11-06 15:39:50.704079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:23.130 [2024-11-06 15:39:50.704103] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:23.130 [2024-11-06 15:39:50.704221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.130 [2024-11-06 15:39:50.704243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032df00 with addr=10.0.0.2, port=4420 00:35:23.130 [2024-11-06 15:39:50.704256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032df00 is same with the state(6) to be set 00:35:23.130 [2024-11-06 15:39:50.704273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032df00 (9): Bad file descriptor 00:35:23.130 [2024-11-06 15:39:50.704288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:23.130 [2024-11-06 15:39:50.704297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:23.130 [2024-11-06 15:39:50.704307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:23.130 [2024-11-06 15:39:50.704317] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:23.130 [2024-11-06 15:39:50.704325] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:23.130 [2024-11-06 15:39:50.704332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:23.130 [2024-11-06 15:39:50.714140] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:23.130 [2024-11-06 15:39:50.714164] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:35:23.130 [2024-11-06 15:39:50.714176] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:23.130 [2024-11-06 15:39:50.714183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:23.130 [2024-11-06 15:39:50.714219] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:23.130 [2024-11-06 15:39:50.714424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.130 [2024-11-06 15:39:50.714444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032df00 with addr=10.0.0.2, port=4420 00:35:23.130 [2024-11-06 15:39:50.714457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032df00 is same with the state(6) to be set 00:35:23.130 [2024-11-06 15:39:50.714475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032df00 (9): Bad file descriptor 00:35:23.130 [2024-11-06 15:39:50.714501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:23.130 [2024-11-06 15:39:50.714513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:23.131 [2024-11-06 15:39:50.714524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:23.131 [2024-11-06 15:39:50.714533] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:23.131 [2024-11-06 15:39:50.714541] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:35:23.131 [2024-11-06 15:39:50.714548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:23.131 [2024-11-06 15:39:50.724255] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:23.131 [2024-11-06 15:39:50.724279] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:23.131 [2024-11-06 15:39:50.724288] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:23.131 [2024-11-06 15:39:50.724295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:23.131 [2024-11-06 15:39:50.724318] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:23.131 [2024-11-06 15:39:50.724556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.131 [2024-11-06 15:39:50.724576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032df00 with addr=10.0.0.2, port=4420 00:35:23.131 [2024-11-06 15:39:50.724589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032df00 is same with the state(6) to be set 00:35:23.131 [2024-11-06 15:39:50.724606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032df00 (9): Bad file descriptor 00:35:23.131 [2024-11-06 15:39:50.724659] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:35:23.131 [2024-11-06 15:39:50.724683] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:23.131 [2024-11-06 15:39:50.724726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:23.131 [2024-11-06 15:39:50.724740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:23.131 [2024-11-06 15:39:50.724753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:23.131 [2024-11-06 15:39:50.724763] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:23.131 [2024-11-06 15:39:50.724771] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:23.131 [2024-11-06 15:39:50.724782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:35:23.131 15:39:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:23.131 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:35:23.391 15:39:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # local max=10 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.391 15:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.391 15:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:35:23.391 15:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:35:23.391 15:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:35:23.391 15:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:35:23.392 15:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:23.392 15:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.392 15:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:24.770 [2024-11-06 15:39:52.023743] 
bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:24.770 [2024-11-06 15:39:52.023769] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:24.770 [2024-11-06 15:39:52.023799] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:24.770 [2024-11-06 15:39:52.110073] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:35:24.770 [2024-11-06 15:39:52.378576] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:35:24.770 [2024-11-06 15:39:52.379578] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x615000330700:1 started. 00:35:24.770 [2024-11-06 15:39:52.381531] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:24.770 [2024-11-06 15:39:52.381565] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:24.770 [2024-11-06 15:39:52.383858] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 3] qpair 0x615000330700 was disconnected and freed. delete nvme_qpair. 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:24.770 request: 00:35:24.770 { 00:35:24.770 "name": "nvme", 00:35:24.770 "trtype": "tcp", 00:35:24.770 "traddr": "10.0.0.2", 00:35:24.770 "adrfam": "ipv4", 00:35:24.770 "trsvcid": "8009", 00:35:24.770 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:24.770 "wait_for_attach": true, 00:35:24.770 "method": "bdev_nvme_start_discovery", 00:35:24.770 "req_id": 1 00:35:24.770 } 00:35:24.770 Got JSON-RPC error response 00:35:24.770 response: 00:35:24.770 { 00:35:24.770 "code": -17, 00:35:24.770 "message": "File exists" 00:35:24.770 } 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n 
'' ]] 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:24.770 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:25.030 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.031 
15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:25.031 request: 00:35:25.031 { 00:35:25.031 "name": "nvme_second", 00:35:25.031 "trtype": "tcp", 00:35:25.031 "traddr": "10.0.0.2", 00:35:25.031 "adrfam": "ipv4", 00:35:25.031 "trsvcid": "8009", 00:35:25.031 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:25.031 "wait_for_attach": true, 00:35:25.031 "method": "bdev_nvme_start_discovery", 
00:35:25.031 "req_id": 1 00:35:25.031 } 00:35:25.031 Got JSON-RPC error response 00:35:25.031 response: 00:35:25.031 { 00:35:25.031 "code": -17, 00:35:25.031 "message": "File exists" 00:35:25.031 } 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.031 15:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:26.408 [2024-11-06 15:39:53.621137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:26.408 [2024-11-06 15:39:53.621173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000330c00 with addr=10.0.0.2, port=8010 00:35:26.408 [2024-11-06 15:39:53.621248] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:26.408 [2024-11-06 15:39:53.621260] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:26.409 [2024-11-06 15:39:53.621271] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:27.344 [2024-11-06 15:39:54.623532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.344 [2024-11-06 15:39:54.623563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000330e80 with addr=10.0.0.2, port=8010 00:35:27.344 [2024-11-06 15:39:54.623609] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:27.344 [2024-11-06 15:39:54.623618] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:27.344 [2024-11-06 15:39:54.623627] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:28.281 [2024-11-06 15:39:55.625688] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:35:28.281 request: 00:35:28.281 { 00:35:28.281 "name": "nvme_second", 00:35:28.281 "trtype": "tcp", 00:35:28.281 "traddr": "10.0.0.2", 00:35:28.281 "adrfam": "ipv4", 00:35:28.281 "trsvcid": 
"8010", 00:35:28.281 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:28.281 "wait_for_attach": false, 00:35:28.281 "attach_timeout_ms": 3000, 00:35:28.281 "method": "bdev_nvme_start_discovery", 00:35:28.281 "req_id": 1 00:35:28.281 } 00:35:28.281 Got JSON-RPC error response 00:35:28.281 response: 00:35:28.281 { 00:35:28.281 "code": -110, 00:35:28.281 "message": "Connection timed out" 00:35:28.281 } 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:35:28.281 15:39:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4050566 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:28.281 rmmod nvme_tcp 00:35:28.281 rmmod nvme_fabrics 00:35:28.281 rmmod nvme_keyring 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 4050444 ']' 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 4050444 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 4050444 ']' 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 4050444 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4050444 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4050444' 00:35:28.281 killing process with pid 4050444 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 4050444 00:35:28.281 15:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 4050444 00:35:29.659 15:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:29.659 15:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:29.659 15:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:29.659 15:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:35:29.659 15:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:35:29.659 15:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:29.659 15:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:35:29.659 15:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:29.659 15:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:29.659 15:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.659 15:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:35:29.659 15:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.564 15:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:31.564 00:35:31.564 real 0m19.631s 00:35:31.564 user 0m24.673s 00:35:31.564 sys 0m6.082s 00:35:31.564 15:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:31.564 15:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:31.564 ************************************ 00:35:31.564 END TEST nvmf_host_discovery 00:35:31.564 ************************************ 00:35:31.564 15:39:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:31.564 15:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:31.564 15:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:31.564 15:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.564 ************************************ 00:35:31.564 START TEST nvmf_host_multipath_status 00:35:31.564 ************************************ 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:31.564 * Looking for test storage... 
00:35:31.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.564 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.824 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:35:31.825 15:39:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.825 15:39:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:31.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.825 --rc genhtml_branch_coverage=1 00:35:31.825 --rc genhtml_function_coverage=1 00:35:31.825 --rc genhtml_legend=1 00:35:31.825 --rc geninfo_all_blocks=1 00:35:31.825 --rc geninfo_unexecuted_blocks=1 00:35:31.825 00:35:31.825 ' 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:31.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.825 --rc genhtml_branch_coverage=1 00:35:31.825 --rc genhtml_function_coverage=1 00:35:31.825 --rc genhtml_legend=1 00:35:31.825 --rc geninfo_all_blocks=1 00:35:31.825 --rc geninfo_unexecuted_blocks=1 00:35:31.825 00:35:31.825 ' 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:31.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.825 --rc genhtml_branch_coverage=1 00:35:31.825 --rc genhtml_function_coverage=1 00:35:31.825 --rc genhtml_legend=1 00:35:31.825 --rc geninfo_all_blocks=1 00:35:31.825 --rc geninfo_unexecuted_blocks=1 00:35:31.825 00:35:31.825 ' 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:31.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.825 --rc genhtml_branch_coverage=1 00:35:31.825 --rc genhtml_function_coverage=1 00:35:31.825 --rc genhtml_legend=1 00:35:31.825 --rc geninfo_all_blocks=1 00:35:31.825 --rc geninfo_unexecuted_blocks=1 00:35:31.825 00:35:31.825 ' 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:35:31.825 
15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:31.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:31.825 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:31.826 15:39:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:35:31.826 15:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
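The `gather_supported_nvmf_pci_devs` section that follows sorts discovered NICs into per-family arrays (`e810`, `x722`, `mlx`) by looking up `vendor:device` PCI IDs in a bus cache, then promotes one family to `pci_devs`. A simplified sketch of that bucketing, using the two E810 devices (`0x8086:0x159b`) visible later in this log; the cache contents and lookup are assumptions for illustration:

```shell
# Illustrative device bucketing; real code populates pci_bus_cache from sysfs.
intel=0x8086 mellanox=0x15b3
declare -A pci_bus_cache=(
  ["$intel:0x159b"]="0000:86:00.0 0000:86:00.1"   # the two ports found in this run
)
e810=() x722=() mlx=()
# Unquoted expansion word-splits multi-device entries, one array element per BDF;
# missing keys expand to nothing and add no elements.
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})

pci_devs=("${e810[@]}")
echo "found ${#pci_devs[@]} e810 device(s): ${pci_devs[*]}"
```

This matches the `(( 2 == 0 ))` count check and the two `Found 0000:86:00.x (0x8086 - 0x159b)` lines in the trace below.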
00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:37.252 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:37.252 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:37.252 Found net devices under 0000:86:00.0: cvl_0_0 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.252 15:40:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.252 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:37.253 Found net devices under 0000:86:00.1: cvl_0_1 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:37.253 15:40:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:37.253 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:37.512 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:37.512 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:37.512 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:37.512 15:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:37.512 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:37.512 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:37.512 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:37.512 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:37.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:37.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:35:37.512 00:35:37.512 --- 10.0.0.2 ping statistics --- 00:35:37.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.512 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:35:37.512 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:37.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:37.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:35:37.512 00:35:37.512 --- 10.0.0.1 ping statistics --- 00:35:37.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.512 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=4055796 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 4055796 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 4055796 ']' 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:37.513 15:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:37.772 [2024-11-06 15:40:05.214548] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:35:37.772 [2024-11-06 15:40:05.214637] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:37.772 [2024-11-06 15:40:05.345974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:38.031 [2024-11-06 15:40:05.454219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:38.031 [2024-11-06 15:40:05.454265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:38.031 [2024-11-06 15:40:05.454275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:38.031 [2024-11-06 15:40:05.454285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:38.031 [2024-11-06 15:40:05.454292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:38.031 [2024-11-06 15:40:05.456276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.031 [2024-11-06 15:40:05.456298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:38.600 15:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:38.600 15:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:35:38.600 15:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:38.600 15:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:38.600 15:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:38.600 15:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:38.600 15:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4055796 00:35:38.600 15:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:38.600 [2024-11-06 15:40:06.223090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:38.860 15:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0
00:35:39.120 Malloc0
00:35:39.120 15:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:35:39.120 15:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:39.514 15:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:39.514 [2024-11-06 15:40:07.098585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:39.514 15:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:35:39.773 [2024-11-06 15:40:07.287079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:35:39.773 15:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:35:39.773 15:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4056264
00:35:39.773 15:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:35:39.773 15:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4056264 /var/tmp/bdevperf.sock
00:35:39.773 15:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 4056264 ']'
00:35:39.773 15:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:35:39.773 15:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100
00:35:39.773 15:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:35:39.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:35:39.773 15:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable
00:35:39.773 15:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:35:40.710 15:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:35:40.710 15:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0
00:35:40.710 15:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:35:40.969 15:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:35:41.228 Nvme0n1
00:35:41.228 15:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:35:41.796 Nvme0n1
00:35:41.796 15:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:35:41.796 15:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:35:43.699 15:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:35:43.699 15:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:35:43.959 15:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:35:44.218 15:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:35:45.155 15:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:35:45.155 15:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:45.155 15:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:45.155 15:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:45.414 15:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:45.414 15:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:35:45.414 15:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:45.414 15:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:45.674 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:45.674 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:45.674 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:45.674 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:45.674 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:45.674 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:45.674 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:45.674 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:45.932 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:45.932 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:45.932 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:45.932 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:46.191 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:46.191 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:46.191 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:46.191 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:46.449 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:46.449 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:35:46.449 15:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:46.708 15:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:35:46.708 15:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:35:48.087 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:35:48.087 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:35:48.087 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:48.087 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:48.087 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:48.087 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:48.087 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:48.087 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:48.345 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:48.345 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:48.345 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:48.345 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:48.345 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:48.345 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:48.345 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:48.345 15:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:48.605 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:48.605 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:48.605 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:48.605 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:48.864 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:48.864 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:48.864 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:48.864 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:49.123 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:49.123 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:35:49.123 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:49.382 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:35:49.382 15:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:35:50.765 15:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:35:50.765 15:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:50.765 15:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:50.765 15:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:50.765 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:50.765 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:35:50.765 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:50.765 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:51.024 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:51.024 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:51.024 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:51.024 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:51.024 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:51.024 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:51.024 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:51.024 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:51.283 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:51.283 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:51.283 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:51.283 15:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:51.542 15:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:51.542 15:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:51.542 15:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:51.542 15:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:51.801 15:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:51.801 15:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:35:51.801 15:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:52.059 15:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:35:52.059 15:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:35:53.435 15:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:35:53.435 15:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:53.435 15:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:53.435 15:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:53.435 15:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:53.435 15:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:35:53.435 15:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:53.435 15:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:53.694 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:53.694 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:53.694 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:53.694 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:53.694 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:53.694 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:53.694 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:53.694 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:53.953 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:53.953 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:53.953 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:53.953 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:54.212 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:54.212 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:35:54.212 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:54.212 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:54.473 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:54.473 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:35:54.473 15:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:35:54.733 15:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:35:54.733 15:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:35:56.113 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:35:56.113 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:35:56.113 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:56.113 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:56.113 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:56.113 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:35:56.113 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:56.113 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:56.113 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:56.113 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:56.113 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:56.113 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:56.372 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:56.372 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:56.372 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:56.372 15:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:56.631 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:56.631 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:35:56.631 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:56.631 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:56.890 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:56.890 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:35:56.890 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:56.890 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:56.890 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:56.890 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:35:56.890 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:35:57.149 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:35:57.407 15:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:35:58.343 15:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:35:58.343 15:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:35:58.343 15:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:58.343 15:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:58.602 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:58.602 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:58.602 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:58.602 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:58.860 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:58.860 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:58.860 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:58.860 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:58.860 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:58.860 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:58.861 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:58.861 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:59.119 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:59.119 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:35:59.119 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:59.119 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:59.379 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:59.379 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:59.379 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:59.379 15:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:59.638 15:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:59.638 15:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:35:59.897 15:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:35:59.897 15:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:35:59.897 15:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:36:00.156 15:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:36:01.534 15:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:36:01.534 15:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:36:01.534 15:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:01.534 15:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:36:01.534 15:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:01.534 15:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:36:01.534 15:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:01.534 15:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:36:01.534 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:01.534 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:36:01.534 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:01.534 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:36:01.793 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:01.793 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:36:01.793 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:36:01.793 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:02.051 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:02.051 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:36:02.051 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:02.051 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:36:02.309 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:02.309 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:36:02.309 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:02.309 15:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:36:02.567 15:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:02.567 15:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:36:02.567 15:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:36:02.826 15:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:36:03.084 15:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:36:04.020 15:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:36:04.020 15:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:36:04.020 15:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:36:04.020 15:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:04.279 15:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:36:04.279 15:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:36:04.279 15:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:36:04.279 15:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:36:04.279 15:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:36:04.279 15:40:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:04.279 15:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:04.279 15:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:04.538 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:04.538 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:04.538 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:04.538 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:04.796 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:04.796 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:04.796 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:04.796 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:05.055 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:05.055 
15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:05.055 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:05.055 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:05.314 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:05.314 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:36:05.314 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:05.314 15:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:36:05.572 15:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:36:06.950 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:36:06.950 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:06.950 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:06.950 15:40:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:06.950 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:06.950 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:06.950 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:06.950 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:06.950 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:06.950 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:06.950 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:06.950 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:07.209 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:07.209 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:07.209 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:07.209 15:40:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.468 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:07.468 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:07.468 15:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.468 15:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:07.727 15:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:07.727 15:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:07.727 15:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.727 15:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:07.986 15:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:07.986 15:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:36:07.986 15:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:08.245 15:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:08.245 15:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:36:09.181 15:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:36:09.182 15:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:09.182 15:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.182 15:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:09.441 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:09.441 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:09.441 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.441 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:09.700 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:09.700 
15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:09.700 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.700 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:09.959 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:09.959 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:09.959 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:09.959 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:10.219 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:10.219 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:10.219 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:10.219 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:10.478 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:36:10.478 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:10.478 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:10.478 15:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:10.478 15:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:10.478 15:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4056264 00:36:10.478 15:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 4056264 ']' 00:36:10.478 15:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 4056264 00:36:10.478 15:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:36:10.478 15:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:10.478 15:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4056264 00:36:10.738 15:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:36:10.738 15:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:36:10.738 15:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4056264' 00:36:10.738 killing process with pid 4056264 00:36:10.738 15:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 4056264 
00:36:10.738 15:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 4056264
00:36:10.738 {
00:36:10.738   "results": [
00:36:10.738     {
00:36:10.738       "job": "Nvme0n1",
00:36:10.738       "core_mask": "0x4",
00:36:10.738       "workload": "verify",
00:36:10.738       "status": "terminated",
00:36:10.738       "verify_range": {
00:36:10.738         "start": 0,
00:36:10.738         "length": 16384
00:36:10.738       },
00:36:10.738       "queue_depth": 128,
00:36:10.738       "io_size": 4096,
00:36:10.738       "runtime": 28.831497,
00:36:10.738       "iops": 9277.24980773631,
00:36:10.738       "mibps": 36.239257061469964,
00:36:10.738       "io_failed": 0,
00:36:10.738       "io_timeout": 0,
00:36:10.738       "avg_latency_us": 13755.800778213774,
00:36:10.738       "min_latency_us": 354.9866666666667,
00:36:10.738       "max_latency_us": 3083812.083809524
00:36:10.738     }
00:36:10.738   ],
00:36:10.738   "core_count": 1
00:36:10.738 }
00:36:11.704 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4056264
00:36:11.704 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:36:11.704 [2024-11-06 15:40:07.385643] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:36:11.704 [2024-11-06 15:40:07.385739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4056264 ]
00:36:11.704 [2024-11-06 15:40:07.511429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:11.704 [2024-11-06 15:40:07.620458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
00:36:11.704 9801.00 IOPS, 38.29 MiB/s [2024-11-06T14:40:39.342Z] 9893.00 IOPS, 38.64 MiB/s [2024-11-06T14:40:39.342Z] 9948.67 IOPS, 38.86 MiB/s [2024-11-06T14:40:39.342Z] 9964.75 IOPS, 38.92 MiB/s [2024-11-06T14:40:39.342Z] 9963.60 IOPS, 38.92 MiB/s [2024-11-06T14:40:39.342Z] 9977.83 IOPS, 38.98 MiB/s [2024-11-06T14:40:39.342Z] 9975.00 IOPS, 38.96 MiB/s [2024-11-06T14:40:39.342Z] 9960.50 IOPS, 38.91 MiB/s [2024-11-06T14:40:39.342Z] 9979.00 IOPS, 38.98 MiB/s [2024-11-06T14:40:39.342Z] 9985.50 IOPS, 39.01 MiB/s [2024-11-06T14:40:39.342Z] 9990.36 IOPS, 39.02 MiB/s [2024-11-06T14:40:39.342Z] 9988.58 IOPS, 39.02 MiB/s [2024-11-06T14:40:39.342Z] [2024-11-06 15:40:22.093386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.704 [2024-11-06 15:40:22.093724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.704 [2024-11-06 15:40:22.093893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:11.704 [2024-11-06 15:40:22.093910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.705 [2024-11-06 15:40:22.093920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:11.705 [2024-11-06 15:40:22.093937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.705 [2024-11-06 15:40:22.093947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:11.705 [2024-11-06 15:40:22.093964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.705 [2024-11-06 15:40:22.093975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:11.705 [2024-11-06 15:40:22.093992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.705 [2024-11-06 15:40:22.094001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:11.705 [2024-11-06 15:40:22.094018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.705 [2024-11-06 15:40:22.094028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:11.705 [2024-11-06 15:40:22.094045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.705 [2024-11-06 15:40:22.094058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:11.705 [2024-11-06 15:40:22.094075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.705 [2024-11-06 15:40:22.094085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:11.705 [2024-11-06 15:40:22.094102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.705 [2024-11-06 15:40:22.094112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:11.705 [2024-11-06 15:40:22.094128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.705 [2024-11-06 15:40:22.094138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.705 [2024-11-06 15:40:22.094407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.705 [2024-11-06 15:40:22.094426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:11.705 [2024-11-06 15:40:22.094446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.705 [2024-11-06 15:40:22.094459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:36:11.705 [2024-11-06 15:40:22.094476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.705 [2024-11-06 15:40:22.094486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
[~110 further command/completion pairs elided for readability: interleaved WRITE commands (lba 103880-104168, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba 103152-103656, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1/nsid:1, timestamps 15:40:22.094503 through 15:40:22.098938, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0025 through 007f and wrapping to 0015]
00:36:11.708 [2024-11-06 15:40:22.098955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.708 [2024-11-06
15:40:22.098966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.098983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.098993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 
15:40:22.099122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099274] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:11.708 [2024-11-06 15:40:22.099456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.708 [2024-11-06 15:40:22.099467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.099894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.099905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:11.709 [2024-11-06 15:40:22.100819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.709 [2024-11-06 15:40:22.100829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.100845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.710 [2024-11-06 15:40:22.100856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.100873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.710 [2024-11-06 15:40:22.100883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.100900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.710 [2024-11-06 15:40:22.100910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.100929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.100941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.100959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.100971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.100988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.100997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.101014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.101025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.101041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.101051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.101067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.101078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.101097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.101107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.101123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.101135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.101151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.101161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.101177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.101188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.101210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.109784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.109812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.109826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.109847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.109865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.109887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.109901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.109922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.109936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.109958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.710 [2024-11-06 15:40:22.109972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.109993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.110007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.110031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.110045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.110068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.110082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.110104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.110118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.110140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.710 [2024-11-06 15:40:22.110154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:11.710 [2024-11-06 15:40:22.110176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.710 [2024-11-06 15:40:22.110190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:36:11.710 [2024-11-06 15:40:22.110230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.710 [2024-11-06 15:40:22.110245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:36:11.710 [2024-11-06 15:40:22.110268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.710 [2024-11-06 15:40:22.110282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:36:11.710 [2024-11-06 15:40:22.110304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.710 [2024-11-06 15:40:22.110318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:11.710 [2024-11-06 15:40:22.110346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.710 [2024-11-06 15:40:22.110360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:11.710 [2024-11-06 15:40:22.110382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.710 [2024-11-06 15:40:22.110396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:36:11.710 [2024-11-06 15:40:22.110418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.710 [2024-11-06 15:40:22.110432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:11.710 [2024-11-06 15:40:22.110454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.710 [2024-11-06 15:40:22.110468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:36:11.710 [2024-11-06 15:40:22.110490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.710 [2024-11-06 15:40:22.110503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:36:11.710 [2024-11-06 15:40:22.110526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.710 [2024-11-06 15:40:22.110540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:36:11.710 [2024-11-06 15:40:22.110562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.710 [2024-11-06 15:40:22.110576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:36:11.710 [2024-11-06 15:40:22.110598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.710 [2024-11-06 15:40:22.110611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.110634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.110648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.110670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.110683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.110706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.110720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.110743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.110757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.110782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.110796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.110823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.110837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.110859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.110873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.110895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.110908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.110931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.110945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.110966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.110980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.111002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.111016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.111039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.111054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.111077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.111092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.111114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.111127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.111150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.111163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.111186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.111199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.111231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.111246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.111269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.111282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.111305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.111318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.111341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.111354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.111377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.111390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.111414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.111427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.112314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.112357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.112394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.112438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.112476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.112515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.112565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.112602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.711 [2024-11-06 15:40:22.112639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.711 [2024-11-06 15:40:22.112675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.711 [2024-11-06 15:40:22.112712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.711 [2024-11-06 15:40:22.112748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.711 [2024-11-06 15:40:22.112784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.711 [2024-11-06 15:40:22.112820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.711 [2024-11-06 15:40:22.112856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:36:11.711 [2024-11-06 15:40:22.112879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.711 [2024-11-06 15:40:22.112892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.112916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.112929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.112953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.112967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.112990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.712 [2024-11-06 15:40:22.113006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.113967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.113990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.114005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.114028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.114042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.114065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.114079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.114101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.114115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.114138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.114152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.114174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.114188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.114214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.114229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.114252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.114266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:36:11.712 [2024-11-06 15:40:22.114289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.712 [2024-11-06 15:40:22.114303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.114325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.114339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.114363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.114379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.713 [2024-11-06 15:40:22.115776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.713 [2024-11-06 15:40:22.115814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.713 [2024-11-06 15:40:22.115852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.713 [2024-11-06 15:40:22.115888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.713 [2024-11-06 15:40:22.115925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.713 [2024-11-06 15:40:22.115963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.115986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.713 [2024-11-06 15:40:22.116001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.116024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.713 [2024-11-06 15:40:22.116041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.116064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.713 [2024-11-06 15:40:22.116078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:36:11.713 [2024-11-06 15:40:22.116101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.713 [2024-11-06 15:40:22.116114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:11.713 [2024-11-06 15:40:22.116137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.713 [2024-11-06 15:40:22.116151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:11.713 [2024-11-06 15:40:22.116174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.713 [2024-11-06 15:40:22.116188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:11.713 [2024-11-06 15:40:22.116217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.713 [2024-11-06 15:40:22.116232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:11.713 [2024-11-06 15:40:22.116255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.714 [2024-11-06 15:40:22.116379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.116967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.116989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.714 [2024-11-06 15:40:22.117528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:11.714 [2024-11-06 15:40:22.117550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.117564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.117587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.117601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.117623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.117637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.117661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.117675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.117697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.117711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.117734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.117748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.117772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.117786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.118612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.118635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.118662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.118680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.118703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.118717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.118740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.118754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.118783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.118797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.118820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.118834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.118856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.118870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.118892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.118906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.118929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.118943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.118965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.118980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.715 [2024-11-06 15:40:22.119353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.715 [2024-11-06 15:40:22.119571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:11.715 [2024-11-06 15:40:22.119596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.715 [2024-11-06 15:40:22.119610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:36:11.715 [2024-11-06 15:40:22.119634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.715 [2024-11-06 15:40:22.119648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:36:11.715 [2024-11-06 15:40:22.119671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.715 [2024-11-06 15:40:22.119685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:36:11.715 [2024-11-06 15:40:22.119707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.119721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.119743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.119757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.119780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.119794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.119816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.119830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.119852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.119866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.119888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.119902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.119923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.119937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.119960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.119973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.119996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.120591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.120602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.121151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.121169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.121196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.121214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.121232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.121242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.121260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.121271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.121289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.121299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.121316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.121326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.121343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.121357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.121375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.121386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.121403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.121413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.121430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.121441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.121459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.716 [2024-11-06 15:40:22.121469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:11.716 [2024-11-06 15:40:22.121486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.717 [2024-11-06 15:40:22.121496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.717 [2024-11-06 15:40:22.121529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.717 [2024-11-06 15:40:22.121557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.717 [2024-11-06 15:40:22.121585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.717 [2024-11-06 15:40:22.121612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.717 [2024-11-06 15:40:22.121639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.717 [2024-11-06 15:40:22.121666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.121695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.121725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.121752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.121779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.121807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.121834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.121861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.121891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.121919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.121947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.121974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.121991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.717 [2024-11-06 15:40:22.122120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.717 [2024-11-06 15:40:22.122516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:36:11.717 [2024-11-06 15:40:22.122533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.122978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.122989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.123006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.123016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.123034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.123045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.123063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.123076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.123093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.123103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.123120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.123131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.123148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.123159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.123809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.123830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.123852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.123863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.123881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.123891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.123910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.123922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.123940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.123950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.123973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.123984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.124001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.124012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.124039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.718 [2024-11-06 15:40:22.124050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:36:11.718 [2024-11-06 15:40:22.124067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.719 [2024-11-06 15:40:22.124082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:36:11.719 [2024-11-06 15:40:22.124099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.719 [2024-11-06 15:40:22.124110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:36:11.719 [2024-11-06 15:40:22.124128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.719 [2024-11-06 15:40:22.124138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:36:11.719 [2024-11-06 15:40:22.124156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.719 [2024-11-06 15:40:22.124169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:36:11.719 [2024-11-06 15:40:22.124188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.719 [2024-11-06 15:40:22.124199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:36:11.719 [2024-11-06 15:40:22.124226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.719 [2024-11-06 15:40:22.124237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:36:11.719 [2024-11-06 15:40:22.124254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.719 [2024-11-06 15:40:22.124264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.719 [2024-11-06 15:40:22.124442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.124980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.124991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.125009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.125019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.125036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.125047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.125063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.125074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.125091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.719 [2024-11-06 15:40:22.125102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:11.719 [2024-11-06 15:40:22.125121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.125132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.125149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.125162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.125180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.125191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.125214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.125225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.125244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.125255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.125273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.125283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.125300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.125311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.125328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.125338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.125355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.125366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.125383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.125394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.125410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.125421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.125438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.125449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.125992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.720 [2024-11-06 15:40:22.126541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.720 [2024-11-06 15:40:22.126569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.720 [2024-11-06 15:40:22.126597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.720 [2024-11-06 15:40:22.126625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.720 [2024-11-06 15:40:22.126653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.720 [2024-11-06 15:40:22.126683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.720 [2024-11-06 15:40:22.126711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:11.720 [2024-11-06 15:40:22.126729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.721 [2024-11-06 15:40:22.126739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:11.721 [2024-11-06 15:40:22.126758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.721 [2024-11-06 15:40:22.126769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:11.721 [2024-11-06 15:40:22.126786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.721 [2024-11-06 15:40:22.126797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:11.721 [2024-11-06 15:40:22.126815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.721 [2024-11-06 15:40:22.126825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:11.721 [2024-11-06 15:40:22.126843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.721 [2024-11-06 15:40:22.126853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:11.721 [2024-11-06 15:40:22.126870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.721 [2024-11-06 15:40:22.126881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:36:11.721 [2024-11-06 15:40:22.126898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.721 [2024-11-06 15:40:22.126909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:36:11.721 [2024-11-06 15:40:22.126926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.721 [2024-11-06 15:40:22.126936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: READ and WRITE commands on qid:1 covering lba 103272-104136, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged 15:40:22.126952-15:40:22.131286 ...]
00:36:11.724 [2024-11-06 15:40:22.131306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.724 [2024-11-06 15:40:22.131318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:36:11.724 [2024-11-06 15:40:22.131336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.724 [2024-11-06 15:40:22.131347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.724 [2024-11-06 15:40:22.131374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.724 [2024-11-06 15:40:22.131806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.724 [2024-11-06 15:40:22.131836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:11.724 [2024-11-06 15:40:22.131853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.131864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.131882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.131892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.131909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.131920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.131937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.131947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.131964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.131975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.131992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.725 [2024-11-06 15:40:22.132806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:11.725 [2024-11-06 15:40:22.132824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.132838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.132857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.132869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.133887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.133916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.133946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.133974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.133991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.726 [2024-11-06 15:40:22.134172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:11.726 [2024-11-06 15:40:22.134472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.726 [2024-11-06 15:40:22.134482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.134988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.134997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.135973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.135989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.136000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:11.727 [2024-11-06 15:40:22.136018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.727 [2024-11-06 15:40:22.136028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.728 [2024-11-06 15:40:22.136055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.728 [2024-11-06 15:40:22.136089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.728 [2024-11-06 15:40:22.136120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.728 [2024-11-06 15:40:22.136148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.728 [2024-11-06 15:40:22.136176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.728 [2024-11-06 15:40:22.136210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.728 [2024-11-06 15:40:22.136238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.728 [2024-11-06 15:40:22.136681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.728 [2024-11-06 15:40:22.136922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:11.728 [2024-11-06 15:40:22.136939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.136950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.136967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.136977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.136995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.137641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.137652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.138282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.138303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.138326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.138336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.138355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.138366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.138383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.138394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.138411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.138421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.138439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.138450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.138470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.138493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.138509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.138520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.729 [2024-11-06 15:40:22.138543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.729 [2024-11-06 15:40:22.138554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.730 [2024-11-06 15:40:22.138581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.730 [2024-11-06 15:40:22.138608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.730 [2024-11-06 15:40:22.138636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.730 [2024-11-06 15:40:22.138663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.730 [2024-11-06 15:40:22.138691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.138719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.138748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.138776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.138804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.138837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.138866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.138894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.138923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.138951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.730 [2024-11-06 15:40:22.138978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.138996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:11.730 [2024-11-06 15:40:22.139507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.730 [2024-11-06 15:40:22.139520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.139872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.139883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.140983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.140993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.141010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.141021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.141038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.141049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.141068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.731 [2024-11-06 15:40:22.141078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.141095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.731 [2024-11-06 15:40:22.141106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:11.731 [2024-11-06 15:40:22.141130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.731 [2024-11-06 15:40:22.141143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.732 [2024-11-06 15:40:22.141548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.141977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.141995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.142006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.142025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.142035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.142054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.142065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.142081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.142092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.142109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.732 [2024-11-06 15:40:22.142119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:11.732 [2024-11-06 15:40:22.142136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.142164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.142195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.142231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.142260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.142290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.142318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.142346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.142374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.142403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.142432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.142461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.142490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.142500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.733 [2024-11-06 15:40:22.143561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.733 [2024-11-06 15:40:22.143590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.733 [2024-11-06 15:40:22.143621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.733 [2024-11-06 15:40:22.143649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.733 [2024-11-06 15:40:22.143676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:11.733 [2024-11-06 15:40:22.143693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.733 [2024-11-06 15:40:22.143705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.143722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.143732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.143749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.143759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.143778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.143789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.143806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.143817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.143833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.734 [2024-11-06 15:40:22.143844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.143864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.143876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.143894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.143905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.143924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.143935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.143953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.143965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.143983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.143993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:11.734 [2024-11-06 15:40:22.144436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.734 [2024-11-06 15:40:22.144447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:36:11.734 [2024-11-06 15:40:22.144464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.734 [2024-11-06 15:40:22.144475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:36:11.734 [2024-11-06 15:40:22.144492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.734 [2024-11-06 15:40:22.144502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:36:11.734 [2024-11-06 15:40:22.144520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.734 [2024-11-06 15:40:22.144531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:36:11.734 [2024-11-06 15:40:22.144548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.734 [2024-11-06 15:40:22.144560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:36:11.734 [2024-11-06 15:40:22.144578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.734 [2024-11-06 15:40:22.144588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:36:11.734 [2024-11-06 15:40:22.144606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.734 [2024-11-06 15:40:22.144617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:36:11.734 [2024-11-06 15:40:22.144634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.734 [2024-11-06 15:40:22.144655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:36:11.734 [2024-11-06 15:40:22.144672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.734 [2024-11-06 15:40:22.144683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:36:11.734 [2024-11-06 15:40:22.144710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.734 [2024-11-06 15:40:22.144721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:36:11.734 [2024-11-06 15:40:22.144738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.734 [2024-11-06 15:40:22.144750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:36:11.734 [2024-11-06 15:40:22.145283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.734 [2024-11-06 15:40:22.145301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:36:11.734 [2024-11-06 15:40:22.145322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.734 [2024-11-06 15:40:22.145333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.145942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.145971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.145989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.735 [2024-11-06 15:40:22.146403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.735 [2024-11-06 15:40:22.146436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:36:11.735 [2024-11-06 15:40:22.146456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.146972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.146991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.147379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.736 [2024-11-06 15:40:22.147390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:36:11.736 [2024-11-06 15:40:22.148029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.737 [2024-11-06 15:40:22.148790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.148976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.148990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.149007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.737 [2024-11-06 15:40:22.149018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:36:11.737 [2024-11-06 15:40:22.149037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.737 [2024-11-06 15:40:22.149047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:11.737 [2024-11-06 15:40:22.149064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.737 [2024-11-06 15:40:22.149075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.149634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.149644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.150167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.150185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.150216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.150228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.150245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.150255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.150273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.150283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.150300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.150311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.150328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.150339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.150356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.150369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.150388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.150399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.150417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.150427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:11.738 [2024-11-06 15:40:22.150446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.738 [2024-11-06 15:40:22.150457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.150862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.150890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.150921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.150949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.150976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.150994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.739 [2024-11-06 15:40:22.151320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.739 [2024-11-06 15:40:22.151383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:11.739 [2024-11-06 15:40:22.151401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.151980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.151998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.152975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.152991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.153002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.153019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.740 [2024-11-06 15:40:22.153030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:11.740 [2024-11-06 15:40:22.153047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.741 [2024-11-06 15:40:22.153057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.741 [2024-11-06 15:40:22.153085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.741 [2024-11-06 15:40:22.153113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.741 [2024-11-06 15:40:22.153141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.741 [2024-11-06 15:40:22.153174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.741 [2024-11-06 15:40:22.153210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.741 [2024-11-06 15:40:22.153238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.741 [2024-11-06 15:40:22.153273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.741 [2024-11-06 15:40:22.153303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.741 [2024-11-06 15:40:22.153330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.741 [2024-11-06 15:40:22.153611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.153981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.153997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.154009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.154026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.154037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.741 [2024-11-06 15:40:22.154053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.741 [2024-11-06 15:40:22.154064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.154967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.154985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:11.742 [2024-11-06 15:40:22.155582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.742 [2024-11-06 15:40:22.155595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.743 [2024-11-06 15:40:22.155623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.743 [2024-11-06 15:40:22.155651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.743 [2024-11-06 15:40:22.155681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.743 [2024-11-06 15:40:22.155708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.155739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.155767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.155797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.155825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.155853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.155883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.155910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.155940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.155967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.155984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.155994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.743 [2024-11-06 15:40:22.156165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.743 [2024-11-06 15:40:22.156485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:11.743 [2024-11-06 15:40:22.156502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.156513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.156530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.156540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.156558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.156569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.156586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.156598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.156615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.156625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.156643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.156654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.156671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.156681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.156699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.156711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.160604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.160617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.160635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.160645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.160663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.160673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.160690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.160701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.160718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.160728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.160745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.160756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.160773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.160783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.160801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.160811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.160827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.160838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.160858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.160868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.160885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.160896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:11.744 [2024-11-06 15:40:22.161677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.744 [2024-11-06 15:40:22.161688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.161708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.745 [2024-11-06 15:40:22.161719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.161740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.745 [2024-11-06 15:40:22.161750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.161772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.161782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.161803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.161813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.161835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.161845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.161866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.161876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.161897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.161909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.161929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.161940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.161960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.161970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.161991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.745 [2024-11-06 15:40:22.162066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.745 [2024-11-06 15:40:22.162933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:11.745 [2024-11-06 15:40:22.162954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:22.162965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:22.163091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:22.163104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:11.746 9814.00 IOPS, 38.34 MiB/s [2024-11-06T14:40:39.384Z] 9113.00 IOPS, 35.60 MiB/s [2024-11-06T14:40:39.384Z] 8505.47 IOPS, 33.22 MiB/s [2024-11-06T14:40:39.384Z] 8068.81 IOPS, 31.52 MiB/s [2024-11-06T14:40:39.384Z] 8177.88 IOPS, 31.94 MiB/s [2024-11-06T14:40:39.384Z] 8269.50 IOPS, 32.30 MiB/s [2024-11-06T14:40:39.384Z] 8446.00 IOPS, 32.99 MiB/s [2024-11-06T14:40:39.384Z] 8640.75 IOPS, 33.75 MiB/s [2024-11-06T14:40:39.384Z] 8802.38 IOPS, 34.38 MiB/s [2024-11-06T14:40:39.384Z] 8847.77 IOPS, 34.56 MiB/s [2024-11-06T14:40:39.384Z] 8890.52 IOPS, 34.73 MiB/s [2024-11-06T14:40:39.384Z] 8951.12 IOPS, 34.97 MiB/s [2024-11-06T14:40:39.384Z] 9072.04 IOPS, 35.44 MiB/s [2024-11-06T14:40:39.384Z] 9184.81 IOPS, 35.88 MiB/s [2024-11-06T14:40:39.384Z] [2024-11-06 15:40:35.796099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796232] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796404] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.746 [2024-11-06 15:40:35.796415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.796953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.746 [2024-11-06 15:40:35.796980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.796997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.797007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.797025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.797035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.797052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:118400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.797062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.797079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.797089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.797107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.797117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.797135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.797145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.798198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.746 [2024-11-06 15:40:35.798231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:11.746 [2024-11-06 15:40:35.798255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:118480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.747 [2024-11-06 15:40:35.798267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.747 [2024-11-06 15:40:35.798294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:118512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.747 [2024-11-06 15:40:35.798331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.747 [2024-11-06 15:40:35.798358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.747 [2024-11-06 15:40:35.798385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.747 [2024-11-06 15:40:35.798660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.747 [2024-11-06 15:40:35.798687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.747 [2024-11-06 15:40:35.798715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.747 [2024-11-06 15:40:35.798742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.747 [2024-11-06 15:40:35.798770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.747 [2024-11-06 15:40:35.798797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.747 [2024-11-06 15:40:35.798824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.798975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.747 [2024-11-06 15:40:35.798989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:11.747 [2024-11-06 15:40:35.799006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.747 [2024-11-06 15:40:35.799016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:36:11.747 [2024-11-06 15:40:35.799032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.747 [2024-11-06 15:40:35.799043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:36:11.747 9246.48 IOPS, 36.12 MiB/s [2024-11-06T14:40:39.385Z] 9272.07 IOPS, 36.22 MiB/s [2024-11-06T14:40:39.385Z] Received shutdown signal, test time was about 28.832156 seconds
00:36:11.747
00:36:11.747 Latency(us)
00:36:11.747 [2024-11-06T14:40:39.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:11.747 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:36:11.747 Verification LBA range: start 0x0 length 0x4000
00:36:11.747 Nvme0n1 : 28.83 9277.25 36.24 0.00 0.00 13755.80 354.99 3083812.08
00:36:11.747 [2024-11-06T14:40:39.385Z] ===================================================================================================================
00:36:11.747 [2024-11-06T14:40:39.385Z] Total : 9277.25 36.24 0.00 0.00 13755.80 354.99 3083812.08
00:36:11.747 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:11.747 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:36:11.747 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:36:11.747 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:36:11.747 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:11.747 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:36:11.747 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:11.747 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:36:11.747 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:11.747 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 4055796 ']'
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 4055796
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 4055796 ']'
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 4055796
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4055796
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4055796'
killing process with pid 4055796
15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 4055796
00:36:12.007 15:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 4055796
00:36:13.383 15:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:13.383 15:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:13.383 15:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:13.383 15:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:36:13.383 15:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:36:13.383 15:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:13.383 15:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:36:13.383 15:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:13.383 15:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:13.383 15:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:13.383 15:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:13.383 15:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:15.289 15:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:15.289
00:36:15.289 real 0m43.730s
00:36:15.289 user 1m57.497s
00:36:15.289 sys 0m11.632s
00:36:15.289 15:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:36:15.289 15:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:36:15.289 ************************************
00:36:15.289 END TEST nvmf_host_multipath_status
00:36:15.289 ************************************
00:36:15.289 15:40:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:36:15.289 15:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:36:15.289 15:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:36:15.289 15:40:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:36:15.289 ************************************
00:36:15.289 START TEST nvmf_discovery_remove_ifc
00:36:15.289 ************************************
00:36:15.289 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:36:15.289 * Looking for test storage...
00:36:15.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.549 15:40:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:36:15.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.549 --rc genhtml_branch_coverage=1 00:36:15.549 --rc genhtml_function_coverage=1 00:36:15.549 --rc genhtml_legend=1 00:36:15.549 --rc geninfo_all_blocks=1 00:36:15.549 --rc geninfo_unexecuted_blocks=1 00:36:15.549 00:36:15.549 ' 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:15.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.549 --rc genhtml_branch_coverage=1 00:36:15.549 --rc genhtml_function_coverage=1 00:36:15.549 --rc genhtml_legend=1 00:36:15.549 --rc geninfo_all_blocks=1 00:36:15.549 --rc geninfo_unexecuted_blocks=1 00:36:15.549 00:36:15.549 ' 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:15.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.549 --rc genhtml_branch_coverage=1 00:36:15.549 --rc genhtml_function_coverage=1 00:36:15.549 --rc genhtml_legend=1 00:36:15.549 --rc geninfo_all_blocks=1 00:36:15.549 --rc geninfo_unexecuted_blocks=1 00:36:15.549 00:36:15.549 ' 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:15.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.549 --rc genhtml_branch_coverage=1 00:36:15.549 --rc genhtml_function_coverage=1 00:36:15.549 --rc genhtml_legend=1 00:36:15.549 --rc geninfo_all_blocks=1 00:36:15.549 --rc geninfo_unexecuted_blocks=1 00:36:15.549 00:36:15.549 ' 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.549 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:15.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:15.550 
15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:36:15.550 15:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:36:22.202 15:40:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:22.202 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:22.202 15:40:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:22.203 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:22.203 15:40:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:22.203 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:22.203 Found net devices under 0000:86:00.0: cvl_0_0 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:22.203 Found net devices under 0000:86:00.1: cvl_0_1 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:22.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:22.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:36:22.203 00:36:22.203 --- 10.0.0.2 ping statistics --- 00:36:22.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.203 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:22.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:22.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:36:22.203 00:36:22.203 --- 10.0.0.1 ping statistics --- 00:36:22.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.203 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=4065195 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 4065195 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 4065195 ']' 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:22.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:22.203 15:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:22.203 [2024-11-06 15:40:49.009432] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:36:22.203 [2024-11-06 15:40:49.009524] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:22.203 [2024-11-06 15:40:49.138671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.203 [2024-11-06 15:40:49.242889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:22.203 [2024-11-06 15:40:49.242933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:22.203 [2024-11-06 15:40:49.242944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:22.203 [2024-11-06 15:40:49.242954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:22.203 [2024-11-06 15:40:49.242961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:22.203 [2024-11-06 15:40:49.244248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.203 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:22.203 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:36:22.203 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:22.203 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:22.203 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:22.461 [2024-11-06 15:40:49.853129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:22.461 [2024-11-06 15:40:49.861280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:22.461 null0 00:36:22.461 [2024-11-06 15:40:49.893287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4065284 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4065284 /tmp/host.sock 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 4065284 ']' 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:22.461 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:22.461 15:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:22.462 [2024-11-06 15:40:49.989296] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:36:22.462 [2024-11-06 15:40:49.989383] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4065284 ] 00:36:22.719 [2024-11-06 15:40:50.120338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.719 [2024-11-06 15:40:50.223360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.287 15:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:23.287 15:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:36:23.287 15:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:23.287 15:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:36:23.287 15:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.287 15:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:23.287 15:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.287 15:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:36:23.287 15:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.287 15:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:23.546 15:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.546 15:40:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:36:23.546 15:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.546 15:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:24.924 [2024-11-06 15:40:52.184395] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:24.924 [2024-11-06 15:40:52.184429] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:24.924 [2024-11-06 15:40:52.184455] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:24.924 [2024-11-06 15:40:52.270729] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:36:24.924 [2024-11-06 15:40:52.452903] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:36:24.924 [2024-11-06 15:40:52.454156] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500032e900:1 started. 
00:36:24.924 [2024-11-06 15:40:52.455767] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:24.924 [2024-11-06 15:40:52.455823] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:24.924 [2024-11-06 15:40:52.455877] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:24.924 [2024-11-06 15:40:52.455900] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:24.924 [2024-11-06 15:40:52.455926] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.924 [2024-11-06 15:40:52.462652] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500032e900 was disconnected and freed. delete nvme_qpair. 
00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:36:24.924 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:36:25.183 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:36:25.183 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:25.183 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:25.183 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:25.183 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.183 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:25.183 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:25.183 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:25.183 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.183 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:25.183 15:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:26.119 15:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:26.119 15:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:26.119 15:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:26.119 15:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.119 15:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:26.119 15:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:26.119 15:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:26.119 15:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.119 15:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:26.119 15:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:27.496 15:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:27.496 15:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:27.496 15:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:27.496 15:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:27.496 15:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:27.496 15:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:27.496 15:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:27.496 15:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.496 15:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:27.496 15:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:28.433 15:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:28.433 15:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:28.433 15:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:28.433 15:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.433 15:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:28.433 15:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:28.433 15:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:28.433 15:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.433 15:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:28.433 15:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:29.370 15:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:29.370 15:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:29.370 15:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:29.370 15:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.370 15:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:29.371 15:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:29.371 15:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:29.371 15:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.371 15:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:29.371 15:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:30.309 15:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:30.309 15:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:30.309 15:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:30.309 15:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:30.309 15:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.309 15:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:30.309 15:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:36:30.309 15:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.309 [2024-11-06 15:40:57.896798] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:36:30.309 [2024-11-06 15:40:57.896855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:30.309 [2024-11-06 15:40:57.896872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:30.309 [2024-11-06 15:40:57.896886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:30.309 [2024-11-06 15:40:57.896899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:30.309 [2024-11-06 15:40:57.896909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:30.309 [2024-11-06 15:40:57.896918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:30.309 [2024-11-06 15:40:57.896928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:30.309 [2024-11-06 15:40:57.896937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:30.309 [2024-11-06 15:40:57.896947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:30.309 [2024-11-06 15:40:57.896957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:30.309 [2024-11-06 15:40:57.896966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e180 is same with the state(6) to be set 00:36:30.309 [2024-11-06 15:40:57.906817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e180 (9): Bad file descriptor 00:36:30.309 15:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:30.309 15:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:30.309 [2024-11-06 15:40:57.916856] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:36:30.309 [2024-11-06 15:40:57.916882] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:36:30.309 [2024-11-06 15:40:57.916890] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:30.309 [2024-11-06 15:40:57.916898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:30.309 [2024-11-06 15:40:57.916933] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:36:31.689 15:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:31.689 15:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:31.689 15:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:31.689 15:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.689 15:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:31.689 15:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:31.689 15:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:31.689 [2024-11-06 15:40:58.955216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:36:31.689 [2024-11-06 15:40:58.955257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e180 with addr=10.0.0.2, port=4420 00:36:31.689 [2024-11-06 15:40:58.955274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e180 is same with the state(6) to be set 00:36:31.689 [2024-11-06 15:40:58.955299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e180 (9): Bad file descriptor 00:36:31.689 [2024-11-06 15:40:58.955700] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:36:31.689 [2024-11-06 15:40:58.955731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:31.689 [2024-11-06 15:40:58.955750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:31.689 [2024-11-06 15:40:58.955762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:31.689 [2024-11-06 15:40:58.955772] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:36:31.689 [2024-11-06 15:40:58.955780] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:36:31.689 [2024-11-06 15:40:58.955787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:31.689 [2024-11-06 15:40:58.955796] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:31.689 [2024-11-06 15:40:58.955803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:31.689 15:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.689 15:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:31.689 15:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:32.624 [2024-11-06 15:40:59.958280] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:36:32.624 [2024-11-06 15:40:59.958310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:36:32.624 [2024-11-06 15:40:59.958325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:32.624 [2024-11-06 15:40:59.958335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:32.624 [2024-11-06 15:40:59.958346] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:36:32.624 [2024-11-06 15:40:59.958356] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:36:32.624 [2024-11-06 15:40:59.958363] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:36:32.624 [2024-11-06 15:40:59.958370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:32.624 [2024-11-06 15:40:59.958411] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:36:32.624 [2024-11-06 15:40:59.958440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:32.624 [2024-11-06 15:40:59.958455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:32.624 [2024-11-06 15:40:59.958469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:32.624 [2024-11-06 15:40:59.958481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:32.624 [2024-11-06 15:40:59.958492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:36:32.624 [2024-11-06 15:40:59.958501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:32.624 [2024-11-06 15:40:59.958512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:32.624 [2024-11-06 15:40:59.958521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:32.624 [2024-11-06 15:40:59.958531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:32.624 [2024-11-06 15:40:59.958541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:32.624 [2024-11-06 15:40:59.958555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:36:32.624 [2024-11-06 15:40:59.958743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032da00 (9): Bad file descriptor 00:36:32.624 [2024-11-06 15:40:59.959761] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:36:32.624 [2024-11-06 15:40:59.959783] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:36:32.624 15:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:32.624 15:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:32.624 15:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:32.624 15:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:32.624 
15:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.624 15:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:32.624 15:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:32.624 15:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.624 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:36:32.624 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:32.624 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:32.624 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:36:32.624 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:32.624 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:32.624 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:32.625 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.625 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:32.625 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:32.625 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:32.625 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:36:32.625 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:32.625 15:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:33.561 15:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:33.561 15:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:33.561 15:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:33.561 15:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.561 15:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:33.561 15:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:33.561 15:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:33.561 15:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.820 15:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:33.820 15:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:34.387 [2024-11-06 15:41:02.018748] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:34.387 [2024-11-06 15:41:02.018774] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:34.387 [2024-11-06 15:41:02.018803] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:34.646 [2024-11-06 15:41:02.145210] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:36:34.646 15:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:34.646 15:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:34.646 15:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:34.646 15:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.646 15:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:34.646 15:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:34.646 15:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:34.646 15:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.646 15:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:34.646 15:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:34.904 [2024-11-06 15:41:02.369498] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:36:34.904 [2024-11-06 15:41:02.370547] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61500032fd00:1 started. 
00:36:34.904 [2024-11-06 15:41:02.372156] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:34.904 [2024-11-06 15:41:02.372199] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:34.904 [2024-11-06 15:41:02.372256] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:34.904 [2024-11-06 15:41:02.372274] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:36:34.904 [2024-11-06 15:41:02.372285] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:34.904 [2024-11-06 15:41:02.378606] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61500032fd00 was disconnected and freed. delete nvme_qpair. 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:36:35.842 15:41:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4065284 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 4065284 ']' 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 4065284 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4065284 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4065284' 00:36:35.842 killing process with pid 4065284 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 4065284 00:36:35.842 15:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 4065284 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:36.780 
15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:36.780 rmmod nvme_tcp 00:36:36.780 rmmod nvme_fabrics 00:36:36.780 rmmod nvme_keyring 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 4065195 ']' 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 4065195 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 4065195 ']' 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 4065195 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4065195 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4065195' 00:36:36.780 
killing process with pid 4065195 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 4065195 00:36:36.780 15:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 4065195 00:36:38.159 15:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:38.159 15:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:38.159 15:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:38.159 15:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:36:38.159 15:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:36:38.159 15:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:38.159 15:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:36:38.159 15:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:38.159 15:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:38.159 15:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.159 15:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:38.159 15:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:40.067 00:36:40.067 real 0m24.623s 00:36:40.067 user 0m31.671s 00:36:40.067 sys 0m6.084s 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:40.067 ************************************ 00:36:40.067 END TEST nvmf_discovery_remove_ifc 00:36:40.067 ************************************ 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.067 ************************************ 00:36:40.067 START TEST nvmf_identify_kernel_target 00:36:40.067 ************************************ 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:40.067 * Looking for test storage... 
00:36:40.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:36:40.067 15:41:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:36:40.067 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:36:40.068 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:40.068 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:36:40.068 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:40.068 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:36:40.068 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:36:40.068 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:40.068 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:40.328 15:41:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.328 --rc genhtml_branch_coverage=1 00:36:40.328 --rc genhtml_function_coverage=1 00:36:40.328 --rc genhtml_legend=1 00:36:40.328 --rc geninfo_all_blocks=1 00:36:40.328 --rc geninfo_unexecuted_blocks=1 00:36:40.328 00:36:40.328 ' 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.328 --rc genhtml_branch_coverage=1 00:36:40.328 --rc genhtml_function_coverage=1 00:36:40.328 --rc genhtml_legend=1 00:36:40.328 --rc geninfo_all_blocks=1 00:36:40.328 --rc geninfo_unexecuted_blocks=1 00:36:40.328 00:36:40.328 ' 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.328 --rc genhtml_branch_coverage=1 00:36:40.328 --rc genhtml_function_coverage=1 00:36:40.328 --rc genhtml_legend=1 00:36:40.328 --rc geninfo_all_blocks=1 00:36:40.328 --rc geninfo_unexecuted_blocks=1 00:36:40.328 00:36:40.328 ' 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.328 --rc genhtml_branch_coverage=1 00:36:40.328 --rc genhtml_function_coverage=1 00:36:40.328 --rc genhtml_legend=1 00:36:40.328 --rc geninfo_all_blocks=1 00:36:40.328 --rc geninfo_unexecuted_blocks=1 00:36:40.328 00:36:40.328 ' 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.328 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:40.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:40.329 15:41:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:46.915 15:41:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:46.915 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:46.915 15:41:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.915 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:46.916 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:46.916 15:41:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:46.916 Found net devices under 0000:86:00.0: cvl_0_0 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:46.916 Found net devices under 0000:86:00.1: cvl_0_1 
00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:46.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:46.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:36:46.916 00:36:46.916 --- 10.0.0.2 ping statistics --- 00:36:46.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.916 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:46.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:46.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:36:46.916 00:36:46.916 --- 10.0.0.1 ping statistics --- 00:36:46.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.916 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:36:46.916 
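The network bring-up logged above (address flush, namespace creation, interface moves, the iptables ACCEPT rule, and the two-way ping check) can be condensed into a standalone sketch. Interface names (cvl_0_0/cvl_0_1), the namespace name, and the IPs are taken from the log; everything else is an assumption about how you would reuse it. The script only echoes the commands (a dry run), since the real steps require root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the network setup the harness performs above
# (nvmf/common.sh nvmf_tcp_init). Interface names, namespace, and IPs are
# taken from the log; run() echoes each command instead of executing it,
# since the real steps need root. Swap run() for direct execution to use it.
set -u

TARGET_IF=cvl_0_0        # target-side NIC, moved into its own netns
INITIATOR_IF=cvl_0_1     # initiator-side NIC, stays in the root namespace
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

run() { echo "+ $*"; }

setup_testnet() {
  run ip -4 addr flush "$TARGET_IF"
  run ip -4 addr flush "$INITIATOR_IF"
  run ip netns add "$NS"
  run ip link set "$TARGET_IF" netns "$NS"
  run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
  run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
  run ip link set "$INITIATOR_IF" up
  run ip netns exec "$NS" ip link set "$TARGET_IF" up
  run ip netns exec "$NS" ip link set lo up
  # Open the NVMe/TCP port; the comment tag lets teardown strip the rule.
  run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF
  # Verify connectivity in both directions, as the harness does.
  run ping -c 1 "$TARGET_IP"
  run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
}

setup_testnet
```

Putting the target NIC in its own network namespace is what lets a single host act as both NVMe/TCP initiator (10.0.0.1, root namespace) and target (10.0.0.2, cvl_0_0_ns_spdk) over a real back-to-back link.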
15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:46.916 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:46.917 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:46.917 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:46.917 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:36:46.917 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:46.917 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:46.917 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:46.917 15:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:48.825 Waiting for block devices as requested 00:36:48.825 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:49.085 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:49.085 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:49.344 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:49.344 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:49.344 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:49.344 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:49.604 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:49.604 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:49.604 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:49.863 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:49.863 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:49.863 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:49.863 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:50.123 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:36:50.123 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:50.123 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:50.382 No valid GPT data, bailing 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:50.382 00:36:50.382 Discovery Log Number of Records 2, Generation counter 2 00:36:50.382 =====Discovery Log Entry 0====== 00:36:50.382 trtype: tcp 00:36:50.382 adrfam: ipv4 00:36:50.382 subtype: current discovery subsystem 
00:36:50.382 treq: not specified, sq flow control disable supported 00:36:50.382 portid: 1 00:36:50.382 trsvcid: 4420 00:36:50.382 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:50.382 traddr: 10.0.0.1 00:36:50.382 eflags: none 00:36:50.382 sectype: none 00:36:50.382 =====Discovery Log Entry 1====== 00:36:50.382 trtype: tcp 00:36:50.382 adrfam: ipv4 00:36:50.382 subtype: nvme subsystem 00:36:50.382 treq: not specified, sq flow control disable supported 00:36:50.382 portid: 1 00:36:50.382 trsvcid: 4420 00:36:50.382 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:50.382 traddr: 10.0.0.1 00:36:50.382 eflags: none 00:36:50.382 sectype: none 00:36:50.382 15:41:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:50.382 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:50.642 ===================================================== 00:36:50.642 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:50.642 ===================================================== 00:36:50.642 Controller Capabilities/Features 00:36:50.642 ================================ 00:36:50.642 Vendor ID: 0000 00:36:50.642 Subsystem Vendor ID: 0000 00:36:50.642 Serial Number: b84f2a9a44b4d5f9a579 00:36:50.642 Model Number: Linux 00:36:50.642 Firmware Version: 6.8.9-20 00:36:50.642 Recommended Arb Burst: 0 00:36:50.642 IEEE OUI Identifier: 00 00 00 00:36:50.642 Multi-path I/O 00:36:50.642 May have multiple subsystem ports: No 00:36:50.642 May have multiple controllers: No 00:36:50.642 Associated with SR-IOV VF: No 00:36:50.642 Max Data Transfer Size: Unlimited 00:36:50.642 Max Number of Namespaces: 0 00:36:50.642 Max Number of I/O Queues: 1024 00:36:50.642 NVMe Specification Version (VS): 1.3 00:36:50.642 NVMe Specification Version (Identify): 1.3 00:36:50.642 Maximum Queue Entries: 1024 
00:36:50.642 Contiguous Queues Required: No 00:36:50.642 Arbitration Mechanisms Supported 00:36:50.642 Weighted Round Robin: Not Supported 00:36:50.642 Vendor Specific: Not Supported 00:36:50.642 Reset Timeout: 7500 ms 00:36:50.642 Doorbell Stride: 4 bytes 00:36:50.642 NVM Subsystem Reset: Not Supported 00:36:50.642 Command Sets Supported 00:36:50.642 NVM Command Set: Supported 00:36:50.642 Boot Partition: Not Supported 00:36:50.642 Memory Page Size Minimum: 4096 bytes 00:36:50.642 Memory Page Size Maximum: 4096 bytes 00:36:50.642 Persistent Memory Region: Not Supported 00:36:50.642 Optional Asynchronous Events Supported 00:36:50.642 Namespace Attribute Notices: Not Supported 00:36:50.642 Firmware Activation Notices: Not Supported 00:36:50.642 ANA Change Notices: Not Supported 00:36:50.642 PLE Aggregate Log Change Notices: Not Supported 00:36:50.642 LBA Status Info Alert Notices: Not Supported 00:36:50.642 EGE Aggregate Log Change Notices: Not Supported 00:36:50.642 Normal NVM Subsystem Shutdown event: Not Supported 00:36:50.642 Zone Descriptor Change Notices: Not Supported 00:36:50.642 Discovery Log Change Notices: Supported 00:36:50.642 Controller Attributes 00:36:50.642 128-bit Host Identifier: Not Supported 00:36:50.642 Non-Operational Permissive Mode: Not Supported 00:36:50.642 NVM Sets: Not Supported 00:36:50.642 Read Recovery Levels: Not Supported 00:36:50.642 Endurance Groups: Not Supported 00:36:50.642 Predictable Latency Mode: Not Supported 00:36:50.643 Traffic Based Keep ALive: Not Supported 00:36:50.643 Namespace Granularity: Not Supported 00:36:50.643 SQ Associations: Not Supported 00:36:50.643 UUID List: Not Supported 00:36:50.643 Multi-Domain Subsystem: Not Supported 00:36:50.643 Fixed Capacity Management: Not Supported 00:36:50.643 Variable Capacity Management: Not Supported 00:36:50.643 Delete Endurance Group: Not Supported 00:36:50.643 Delete NVM Set: Not Supported 00:36:50.643 Extended LBA Formats Supported: Not Supported 00:36:50.643 Flexible 
Data Placement Supported: Not Supported 00:36:50.643 00:36:50.643 Controller Memory Buffer Support 00:36:50.643 ================================ 00:36:50.643 Supported: No 00:36:50.643 00:36:50.643 Persistent Memory Region Support 00:36:50.643 ================================ 00:36:50.643 Supported: No 00:36:50.643 00:36:50.643 Admin Command Set Attributes 00:36:50.643 ============================ 00:36:50.643 Security Send/Receive: Not Supported 00:36:50.643 Format NVM: Not Supported 00:36:50.643 Firmware Activate/Download: Not Supported 00:36:50.643 Namespace Management: Not Supported 00:36:50.643 Device Self-Test: Not Supported 00:36:50.643 Directives: Not Supported 00:36:50.643 NVMe-MI: Not Supported 00:36:50.643 Virtualization Management: Not Supported 00:36:50.643 Doorbell Buffer Config: Not Supported 00:36:50.643 Get LBA Status Capability: Not Supported 00:36:50.643 Command & Feature Lockdown Capability: Not Supported 00:36:50.643 Abort Command Limit: 1 00:36:50.643 Async Event Request Limit: 1 00:36:50.643 Number of Firmware Slots: N/A 00:36:50.643 Firmware Slot 1 Read-Only: N/A 00:36:50.643 Firmware Activation Without Reset: N/A 00:36:50.643 Multiple Update Detection Support: N/A 00:36:50.643 Firmware Update Granularity: No Information Provided 00:36:50.643 Per-Namespace SMART Log: No 00:36:50.643 Asymmetric Namespace Access Log Page: Not Supported 00:36:50.643 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:50.643 Command Effects Log Page: Not Supported 00:36:50.643 Get Log Page Extended Data: Supported 00:36:50.643 Telemetry Log Pages: Not Supported 00:36:50.643 Persistent Event Log Pages: Not Supported 00:36:50.643 Supported Log Pages Log Page: May Support 00:36:50.643 Commands Supported & Effects Log Page: Not Supported 00:36:50.643 Feature Identifiers & Effects Log Page:May Support 00:36:50.643 NVMe-MI Commands & Effects Log Page: May Support 00:36:50.643 Data Area 4 for Telemetry Log: Not Supported 00:36:50.643 Error Log Page Entries 
Supported: 1 00:36:50.643 Keep Alive: Not Supported 00:36:50.643 00:36:50.643 NVM Command Set Attributes 00:36:50.643 ========================== 00:36:50.643 Submission Queue Entry Size 00:36:50.643 Max: 1 00:36:50.643 Min: 1 00:36:50.643 Completion Queue Entry Size 00:36:50.643 Max: 1 00:36:50.643 Min: 1 00:36:50.643 Number of Namespaces: 0 00:36:50.643 Compare Command: Not Supported 00:36:50.643 Write Uncorrectable Command: Not Supported 00:36:50.643 Dataset Management Command: Not Supported 00:36:50.643 Write Zeroes Command: Not Supported 00:36:50.643 Set Features Save Field: Not Supported 00:36:50.643 Reservations: Not Supported 00:36:50.643 Timestamp: Not Supported 00:36:50.643 Copy: Not Supported 00:36:50.643 Volatile Write Cache: Not Present 00:36:50.643 Atomic Write Unit (Normal): 1 00:36:50.643 Atomic Write Unit (PFail): 1 00:36:50.643 Atomic Compare & Write Unit: 1 00:36:50.643 Fused Compare & Write: Not Supported 00:36:50.643 Scatter-Gather List 00:36:50.643 SGL Command Set: Supported 00:36:50.643 SGL Keyed: Not Supported 00:36:50.643 SGL Bit Bucket Descriptor: Not Supported 00:36:50.643 SGL Metadata Pointer: Not Supported 00:36:50.643 Oversized SGL: Not Supported 00:36:50.643 SGL Metadata Address: Not Supported 00:36:50.643 SGL Offset: Supported 00:36:50.643 Transport SGL Data Block: Not Supported 00:36:50.643 Replay Protected Memory Block: Not Supported 00:36:50.643 00:36:50.643 Firmware Slot Information 00:36:50.643 ========================= 00:36:50.643 Active slot: 0 00:36:50.643 00:36:50.643 00:36:50.643 Error Log 00:36:50.643 ========= 00:36:50.643 00:36:50.643 Active Namespaces 00:36:50.643 ================= 00:36:50.643 Discovery Log Page 00:36:50.643 ================== 00:36:50.643 Generation Counter: 2 00:36:50.643 Number of Records: 2 00:36:50.643 Record Format: 0 00:36:50.643 00:36:50.643 Discovery Log Entry 0 00:36:50.643 ---------------------- 00:36:50.643 Transport Type: 3 (TCP) 00:36:50.643 Address Family: 1 (IPv4) 00:36:50.643 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:36:50.643 Entry Flags: 00:36:50.643 Duplicate Returned Information: 0 00:36:50.643 Explicit Persistent Connection Support for Discovery: 0 00:36:50.643 Transport Requirements: 00:36:50.643 Secure Channel: Not Specified 00:36:50.643 Port ID: 1 (0x0001) 00:36:50.643 Controller ID: 65535 (0xffff) 00:36:50.643 Admin Max SQ Size: 32 00:36:50.643 Transport Service Identifier: 4420 00:36:50.643 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:50.643 Transport Address: 10.0.0.1 00:36:50.643 Discovery Log Entry 1 00:36:50.643 ---------------------- 00:36:50.643 Transport Type: 3 (TCP) 00:36:50.643 Address Family: 1 (IPv4) 00:36:50.643 Subsystem Type: 2 (NVM Subsystem) 00:36:50.643 Entry Flags: 00:36:50.643 Duplicate Returned Information: 0 00:36:50.643 Explicit Persistent Connection Support for Discovery: 0 00:36:50.643 Transport Requirements: 00:36:50.643 Secure Channel: Not Specified 00:36:50.643 Port ID: 1 (0x0001) 00:36:50.643 Controller ID: 65535 (0xffff) 00:36:50.643 Admin Max SQ Size: 32 00:36:50.643 Transport Service Identifier: 4420 00:36:50.643 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:50.643 Transport Address: 10.0.0.1 00:36:50.643 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:50.904 get_feature(0x01) failed 00:36:50.904 get_feature(0x02) failed 00:36:50.904 get_feature(0x04) failed 00:36:50.904 ===================================================== 00:36:50.904 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:50.904 ===================================================== 00:36:50.904 Controller Capabilities/Features 00:36:50.904 ================================ 00:36:50.904 Vendor ID: 0000 00:36:50.904 Subsystem Vendor ID: 
0000 00:36:50.904 Serial Number: 71ca286421161e8652b6 00:36:50.904 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:50.904 Firmware Version: 6.8.9-20 00:36:50.904 Recommended Arb Burst: 6 00:36:50.904 IEEE OUI Identifier: 00 00 00 00:36:50.904 Multi-path I/O 00:36:50.904 May have multiple subsystem ports: Yes 00:36:50.904 May have multiple controllers: Yes 00:36:50.904 Associated with SR-IOV VF: No 00:36:50.904 Max Data Transfer Size: Unlimited 00:36:50.904 Max Number of Namespaces: 1024 00:36:50.904 Max Number of I/O Queues: 128 00:36:50.904 NVMe Specification Version (VS): 1.3 00:36:50.904 NVMe Specification Version (Identify): 1.3 00:36:50.904 Maximum Queue Entries: 1024 00:36:50.904 Contiguous Queues Required: No 00:36:50.904 Arbitration Mechanisms Supported 00:36:50.904 Weighted Round Robin: Not Supported 00:36:50.904 Vendor Specific: Not Supported 00:36:50.904 Reset Timeout: 7500 ms 00:36:50.904 Doorbell Stride: 4 bytes 00:36:50.904 NVM Subsystem Reset: Not Supported 00:36:50.904 Command Sets Supported 00:36:50.904 NVM Command Set: Supported 00:36:50.904 Boot Partition: Not Supported 00:36:50.904 Memory Page Size Minimum: 4096 bytes 00:36:50.904 Memory Page Size Maximum: 4096 bytes 00:36:50.904 Persistent Memory Region: Not Supported 00:36:50.904 Optional Asynchronous Events Supported 00:36:50.904 Namespace Attribute Notices: Supported 00:36:50.904 Firmware Activation Notices: Not Supported 00:36:50.904 ANA Change Notices: Supported 00:36:50.904 PLE Aggregate Log Change Notices: Not Supported 00:36:50.904 LBA Status Info Alert Notices: Not Supported 00:36:50.904 EGE Aggregate Log Change Notices: Not Supported 00:36:50.904 Normal NVM Subsystem Shutdown event: Not Supported 00:36:50.904 Zone Descriptor Change Notices: Not Supported 00:36:50.904 Discovery Log Change Notices: Not Supported 00:36:50.904 Controller Attributes 00:36:50.904 128-bit Host Identifier: Supported 00:36:50.904 Non-Operational Permissive Mode: Not Supported 00:36:50.904 NVM Sets: Not 
Supported 00:36:50.904 Read Recovery Levels: Not Supported 00:36:50.904 Endurance Groups: Not Supported 00:36:50.904 Predictable Latency Mode: Not Supported 00:36:50.904 Traffic Based Keep ALive: Supported 00:36:50.904 Namespace Granularity: Not Supported 00:36:50.904 SQ Associations: Not Supported 00:36:50.904 UUID List: Not Supported 00:36:50.904 Multi-Domain Subsystem: Not Supported 00:36:50.904 Fixed Capacity Management: Not Supported 00:36:50.904 Variable Capacity Management: Not Supported 00:36:50.904 Delete Endurance Group: Not Supported 00:36:50.904 Delete NVM Set: Not Supported 00:36:50.904 Extended LBA Formats Supported: Not Supported 00:36:50.904 Flexible Data Placement Supported: Not Supported 00:36:50.904 00:36:50.904 Controller Memory Buffer Support 00:36:50.904 ================================ 00:36:50.904 Supported: No 00:36:50.904 00:36:50.904 Persistent Memory Region Support 00:36:50.904 ================================ 00:36:50.904 Supported: No 00:36:50.904 00:36:50.904 Admin Command Set Attributes 00:36:50.904 ============================ 00:36:50.904 Security Send/Receive: Not Supported 00:36:50.904 Format NVM: Not Supported 00:36:50.904 Firmware Activate/Download: Not Supported 00:36:50.904 Namespace Management: Not Supported 00:36:50.904 Device Self-Test: Not Supported 00:36:50.904 Directives: Not Supported 00:36:50.904 NVMe-MI: Not Supported 00:36:50.904 Virtualization Management: Not Supported 00:36:50.904 Doorbell Buffer Config: Not Supported 00:36:50.904 Get LBA Status Capability: Not Supported 00:36:50.904 Command & Feature Lockdown Capability: Not Supported 00:36:50.904 Abort Command Limit: 4 00:36:50.904 Async Event Request Limit: 4 00:36:50.904 Number of Firmware Slots: N/A 00:36:50.904 Firmware Slot 1 Read-Only: N/A 00:36:50.904 Firmware Activation Without Reset: N/A 00:36:50.904 Multiple Update Detection Support: N/A 00:36:50.904 Firmware Update Granularity: No Information Provided 00:36:50.904 Per-Namespace SMART Log: Yes 
00:36:50.904 Asymmetric Namespace Access Log Page: Supported 00:36:50.904 ANA Transition Time : 10 sec 00:36:50.904 00:36:50.904 Asymmetric Namespace Access Capabilities 00:36:50.904 ANA Optimized State : Supported 00:36:50.904 ANA Non-Optimized State : Supported 00:36:50.904 ANA Inaccessible State : Supported 00:36:50.904 ANA Persistent Loss State : Supported 00:36:50.904 ANA Change State : Supported 00:36:50.904 ANAGRPID is not changed : No 00:36:50.905 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:50.905 00:36:50.905 ANA Group Identifier Maximum : 128 00:36:50.905 Number of ANA Group Identifiers : 128 00:36:50.905 Max Number of Allowed Namespaces : 1024 00:36:50.905 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:50.905 Command Effects Log Page: Supported 00:36:50.905 Get Log Page Extended Data: Supported 00:36:50.905 Telemetry Log Pages: Not Supported 00:36:50.905 Persistent Event Log Pages: Not Supported 00:36:50.905 Supported Log Pages Log Page: May Support 00:36:50.905 Commands Supported & Effects Log Page: Not Supported 00:36:50.905 Feature Identifiers & Effects Log Page:May Support 00:36:50.905 NVMe-MI Commands & Effects Log Page: May Support 00:36:50.905 Data Area 4 for Telemetry Log: Not Supported 00:36:50.905 Error Log Page Entries Supported: 128 00:36:50.905 Keep Alive: Supported 00:36:50.905 Keep Alive Granularity: 1000 ms 00:36:50.905 00:36:50.905 NVM Command Set Attributes 00:36:50.905 ========================== 00:36:50.905 Submission Queue Entry Size 00:36:50.905 Max: 64 00:36:50.905 Min: 64 00:36:50.905 Completion Queue Entry Size 00:36:50.905 Max: 16 00:36:50.905 Min: 16 00:36:50.905 Number of Namespaces: 1024 00:36:50.905 Compare Command: Not Supported 00:36:50.905 Write Uncorrectable Command: Not Supported 00:36:50.905 Dataset Management Command: Supported 00:36:50.905 Write Zeroes Command: Supported 00:36:50.905 Set Features Save Field: Not Supported 00:36:50.905 Reservations: Not Supported 00:36:50.905 Timestamp: Not Supported 
00:36:50.905 Copy: Not Supported 00:36:50.905 Volatile Write Cache: Present 00:36:50.905 Atomic Write Unit (Normal): 1 00:36:50.905 Atomic Write Unit (PFail): 1 00:36:50.905 Atomic Compare & Write Unit: 1 00:36:50.905 Fused Compare & Write: Not Supported 00:36:50.905 Scatter-Gather List 00:36:50.905 SGL Command Set: Supported 00:36:50.905 SGL Keyed: Not Supported 00:36:50.905 SGL Bit Bucket Descriptor: Not Supported 00:36:50.905 SGL Metadata Pointer: Not Supported 00:36:50.905 Oversized SGL: Not Supported 00:36:50.905 SGL Metadata Address: Not Supported 00:36:50.905 SGL Offset: Supported 00:36:50.905 Transport SGL Data Block: Not Supported 00:36:50.905 Replay Protected Memory Block: Not Supported 00:36:50.905 00:36:50.905 Firmware Slot Information 00:36:50.905 ========================= 00:36:50.905 Active slot: 0 00:36:50.905 00:36:50.905 Asymmetric Namespace Access 00:36:50.905 =========================== 00:36:50.905 Change Count : 0 00:36:50.905 Number of ANA Group Descriptors : 1 00:36:50.905 ANA Group Descriptor : 0 00:36:50.905 ANA Group ID : 1 00:36:50.905 Number of NSID Values : 1 00:36:50.905 Change Count : 0 00:36:50.905 ANA State : 1 00:36:50.905 Namespace Identifier : 1 00:36:50.905 00:36:50.905 Commands Supported and Effects 00:36:50.905 ============================== 00:36:50.905 Admin Commands 00:36:50.905 -------------- 00:36:50.905 Get Log Page (02h): Supported 00:36:50.905 Identify (06h): Supported 00:36:50.905 Abort (08h): Supported 00:36:50.905 Set Features (09h): Supported 00:36:50.905 Get Features (0Ah): Supported 00:36:50.905 Asynchronous Event Request (0Ch): Supported 00:36:50.905 Keep Alive (18h): Supported 00:36:50.905 I/O Commands 00:36:50.905 ------------ 00:36:50.905 Flush (00h): Supported 00:36:50.905 Write (01h): Supported LBA-Change 00:36:50.905 Read (02h): Supported 00:36:50.905 Write Zeroes (08h): Supported LBA-Change 00:36:50.905 Dataset Management (09h): Supported 00:36:50.905 00:36:50.905 Error Log 00:36:50.905 ========= 
00:36:50.905 Entry: 0 00:36:50.905 Error Count: 0x3 00:36:50.905 Submission Queue Id: 0x0 00:36:50.905 Command Id: 0x5 00:36:50.905 Phase Bit: 0 00:36:50.905 Status Code: 0x2 00:36:50.905 Status Code Type: 0x0 00:36:50.905 Do Not Retry: 1 00:36:50.905 Error Location: 0x28 00:36:50.905 LBA: 0x0 00:36:50.905 Namespace: 0x0 00:36:50.905 Vendor Log Page: 0x0 00:36:50.905 ----------- 00:36:50.905 Entry: 1 00:36:50.905 Error Count: 0x2 00:36:50.905 Submission Queue Id: 0x0 00:36:50.905 Command Id: 0x5 00:36:50.905 Phase Bit: 0 00:36:50.905 Status Code: 0x2 00:36:50.905 Status Code Type: 0x0 00:36:50.905 Do Not Retry: 1 00:36:50.905 Error Location: 0x28 00:36:50.905 LBA: 0x0 00:36:50.905 Namespace: 0x0 00:36:50.905 Vendor Log Page: 0x0 00:36:50.905 ----------- 00:36:50.905 Entry: 2 00:36:50.905 Error Count: 0x1 00:36:50.905 Submission Queue Id: 0x0 00:36:50.905 Command Id: 0x4 00:36:50.905 Phase Bit: 0 00:36:50.905 Status Code: 0x2 00:36:50.905 Status Code Type: 0x0 00:36:50.905 Do Not Retry: 1 00:36:50.905 Error Location: 0x28 00:36:50.905 LBA: 0x0 00:36:50.905 Namespace: 0x0 00:36:50.905 Vendor Log Page: 0x0 00:36:50.905 00:36:50.905 Number of Queues 00:36:50.905 ================ 00:36:50.905 Number of I/O Submission Queues: 128 00:36:50.905 Number of I/O Completion Queues: 128 00:36:50.905 00:36:50.905 ZNS Specific Controller Data 00:36:50.905 ============================ 00:36:50.905 Zone Append Size Limit: 0 00:36:50.905 00:36:50.905 00:36:50.905 Active Namespaces 00:36:50.905 ================= 00:36:50.905 get_feature(0x05) failed 00:36:50.905 Namespace ID:1 00:36:50.905 Command Set Identifier: NVM (00h) 00:36:50.905 Deallocate: Supported 00:36:50.905 Deallocated/Unwritten Error: Not Supported 00:36:50.905 Deallocated Read Value: Unknown 00:36:50.905 Deallocate in Write Zeroes: Not Supported 00:36:50.905 Deallocated Guard Field: 0xFFFF 00:36:50.905 Flush: Supported 00:36:50.905 Reservation: Not Supported 00:36:50.905 Namespace Sharing Capabilities: Multiple 
Controllers 00:36:50.905 Size (in LBAs): 3125627568 (1490GiB) 00:36:50.905 Capacity (in LBAs): 3125627568 (1490GiB) 00:36:50.905 Utilization (in LBAs): 3125627568 (1490GiB) 00:36:50.905 UUID: 219a9a6f-56aa-4351-b00f-36742eec1748 00:36:50.905 Thin Provisioning: Not Supported 00:36:50.905 Per-NS Atomic Units: Yes 00:36:50.905 Atomic Boundary Size (Normal): 0 00:36:50.905 Atomic Boundary Size (PFail): 0 00:36:50.905 Atomic Boundary Offset: 0 00:36:50.905 NGUID/EUI64 Never Reused: No 00:36:50.905 ANA group ID: 1 00:36:50.905 Namespace Write Protected: No 00:36:50.905 Number of LBA Formats: 1 00:36:50.905 Current LBA Format: LBA Format #00 00:36:50.905 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:50.905 00:36:50.905 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:50.905 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:50.905 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:36:50.905 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:50.905 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:36:50.905 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:50.905 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:50.905 rmmod nvme_tcp 00:36:50.905 rmmod nvme_fabrics 00:36:50.905 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:50.906 15:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:52.811 15:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:52.811 15:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:52.811 15:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:52.811 15:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:36:52.811 15:41:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:52.811 15:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:53.071 15:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:53.071 15:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:53.071 15:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:53.071 15:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:53.071 15:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:55.613 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:55.872 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:36:57.253 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:57.512 00:36:57.512 real 0m17.399s 00:36:57.512 user 0m4.356s 00:36:57.512 sys 0m8.824s 00:36:57.512 15:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:57.512 15:41:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.512 ************************************ 00:36:57.512 END TEST nvmf_identify_kernel_target 00:36:57.512 ************************************ 00:36:57.512 15:41:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:57.512 15:41:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:57.512 15:41:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:57.512 15:41:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.512 ************************************ 00:36:57.512 START TEST nvmf_auth_host 00:36:57.512 ************************************ 00:36:57.513 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:57.513 * Looking for test storage... 
00:36:57.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:57.513 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:57.513 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:36:57.513 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:57.773 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:57.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.773 --rc genhtml_branch_coverage=1 00:36:57.773 --rc genhtml_function_coverage=1 00:36:57.773 --rc genhtml_legend=1 00:36:57.773 --rc geninfo_all_blocks=1 00:36:57.773 --rc geninfo_unexecuted_blocks=1 00:36:57.773 00:36:57.773 ' 00:36:57.773 15:41:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:57.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.773 --rc genhtml_branch_coverage=1 00:36:57.773 --rc genhtml_function_coverage=1 00:36:57.773 --rc genhtml_legend=1 00:36:57.773 --rc geninfo_all_blocks=1 00:36:57.773 --rc geninfo_unexecuted_blocks=1 00:36:57.773 00:36:57.773 ' 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:57.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.774 --rc genhtml_branch_coverage=1 00:36:57.774 --rc genhtml_function_coverage=1 00:36:57.774 --rc genhtml_legend=1 00:36:57.774 --rc geninfo_all_blocks=1 00:36:57.774 --rc geninfo_unexecuted_blocks=1 00:36:57.774 00:36:57.774 ' 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:57.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.774 --rc genhtml_branch_coverage=1 00:36:57.774 --rc genhtml_function_coverage=1 00:36:57.774 --rc genhtml_legend=1 00:36:57.774 --rc geninfo_all_blocks=1 00:36:57.774 --rc geninfo_unexecuted_blocks=1 00:36:57.774 00:36:57.774 ' 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.774 15:41:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:57.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:57.774 15:41:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:57.774 15:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:04.349 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:04.349 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:04.349 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:04.350 Found net devices under 0000:86:00.0: cvl_0_0 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:04.350 Found net devices under 0000:86:00.1: cvl_0_1 00:37:04.350 15:41:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:04.350 15:41:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:04.350 15:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:04.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:04.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:37:04.350 00:37:04.350 --- 10.0.0.2 ping statistics --- 00:37:04.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.350 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:04.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:04.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:37:04.350 00:37:04.350 --- 10.0.0.1 ping statistics --- 00:37:04.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.350 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=4077728 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:37:04.350 15:41:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 4077728 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 4077728 ']' 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:04.350 15:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7a122170fb7ed5f0cd13ba64154cc091 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.MaZ 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7a122170fb7ed5f0cd13ba64154cc091 0 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7a122170fb7ed5f0cd13ba64154cc091 0 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7a122170fb7ed5f0cd13ba64154cc091 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.MaZ 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.MaZ 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.MaZ 00:37:04.611 15:41:32 
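The `gen_dhchap_key`/`format_dhchap_key` steps traced above read random hex from `/dev/urandom` via `xxd`, then hand the ASCII hex string to an inline Python snippet that appends a little-endian CRC32 and base64-encodes the result into the `DHHC-1:<digest>:<base64>:` secret form. The function names follow the trace, but the bodies below are a reconstruction under that reading, not SPDK's verbatim code:

```shell
# Reconstruction of the key-generation helpers seen in the trace.
# The secret is the ASCII hex string itself (so len = number of hex chars).
format_dhchap_key() {
    local key=$1 digest=$2
    # Append crc32(secret) little-endian, base64 the whole thing, and wrap
    # it in the DHHC-1 envelope with the digest id as two hex digits.
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}

gen_dhchap_key() {
    local digest=$1 len=$2 key
    # xxd -l reads len/2 random bytes and prints len hex characters.
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    format_dhchap_key "$key" "$digest"
}
```

With the 32-character key from the trace and digest 0 (null), this yields a 59-character `DHHC-1:00:...:` string: 36 secret+CRC bytes become exactly 48 base64 characters.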
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a9b528477472a0ee41444d04d151c2991941501e5005082986ce73d828fb4c97 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Xwq 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a9b528477472a0ee41444d04d151c2991941501e5005082986ce73d828fb4c97 3 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a9b528477472a0ee41444d04d151c2991941501e5005082986ce73d828fb4c97 3 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a9b528477472a0ee41444d04d151c2991941501e5005082986ce73d828fb4c97 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Xwq 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Xwq 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Xwq 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=271f02b23c850365da8425278d74a908129a9ef69622e568 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.WbF 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 271f02b23c850365da8425278d74a908129a9ef69622e568 0 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 271f02b23c850365da8425278d74a908129a9ef69622e568 0 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:04.611 15:41:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=271f02b23c850365da8425278d74a908129a9ef69622e568 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:37:04.611 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:04.871 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.WbF 00:37:04.871 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.WbF 00:37:04.871 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.WbF 00:37:04.871 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:37:04.871 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:04.871 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:04.871 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:04.871 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:37:04.871 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:37:04.871 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6a5e3e370b5fd0544abe441723c65f88b941a5c264fb9a68 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.P0a 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6a5e3e370b5fd0544abe441723c65f88b941a5c264fb9a68 2 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 6a5e3e370b5fd0544abe441723c65f88b941a5c264fb9a68 2 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6a5e3e370b5fd0544abe441723c65f88b941a5c264fb9a68 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.P0a 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.P0a 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.P0a 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a77fe123812dfb3c70eb53ba0a55c23c 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.VZM 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a77fe123812dfb3c70eb53ba0a55c23c 1 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a77fe123812dfb3c70eb53ba0a55c23c 1 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a77fe123812dfb3c70eb53ba0a55c23c 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.VZM 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.VZM 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.VZM 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=8727adfafcf0a244c455d42c7371f940 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xvS 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8727adfafcf0a244c455d42c7371f940 1 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8727adfafcf0a244c455d42c7371f940 1 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8727adfafcf0a244c455d42c7371f940 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xvS 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xvS 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.xvS 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:37:04.872 15:41:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=500ca5a1dae25ae10e778a441211dcb032b60ef82116998b 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hn3 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 500ca5a1dae25ae10e778a441211dcb032b60ef82116998b 2 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 500ca5a1dae25ae10e778a441211dcb032b60ef82116998b 2 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=500ca5a1dae25ae10e778a441211dcb032b60ef82116998b 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hn3 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hn3 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.hn3 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:37:04.872 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:05.132 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dc14e862fe310d47f1bdca220585634e 00:37:05.132 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:37:05.132 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.M6Q 00:37:05.132 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dc14e862fe310d47f1bdca220585634e 0 00:37:05.132 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dc14e862fe310d47f1bdca220585634e 0 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dc14e862fe310d47f1bdca220585634e 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.M6Q 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.M6Q 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.M6Q 00:37:05.133 15:41:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fa3e0e83fdcf8042168996e4e20e01800c300db53d166f34f867aefcc7e0f31f 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5Y6 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fa3e0e83fdcf8042168996e4e20e01800c300db53d166f34f867aefcc7e0f31f 3 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fa3e0e83fdcf8042168996e4e20e01800c300db53d166f34f867aefcc7e0f31f 3 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fa3e0e83fdcf8042168996e4e20e01800c300db53d166f34f867aefcc7e0f31f 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5Y6 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5Y6 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.5Y6 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4077728 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 4077728 ']' 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:05.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:05.133 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.392 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:05.392 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:37:05.392 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:05.392 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MaZ 00:37:05.392 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.392 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.392 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Xwq ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xwq 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.WbF 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.P0a ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.P0a 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.VZM 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.xvS ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xvS 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.hn3 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.M6Q ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.M6Q 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.5Y6 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:05.393 15:41:32 
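The registration loop traced above hands each generated key file to the running `nvmf_tgt` over its RPC socket, as `key<i>` plus an optional controller key `ckey<i>`. A condensed sketch of that loop (the `rpc_cmd` wrapper in the trace resolves to `scripts/rpc.py` against `/var/tmp/spdk.sock`; array names follow `host/auth.sh`):

```shell
# keys[] / ckeys[] hold the /tmp/spdk.key-* paths produced earlier.
for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    # ckeys[4] is empty in the trace, so the controller key is conditional.
    if [[ -n ${ckeys[i]} ]]; then
        scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done
```

Registering the files as named keyring entries is what later lets the auth tests refer to them by `key0`/`ckey0` style handles instead of paths.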
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:05.393 15:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:07.944 Waiting for block devices as requested 00:37:08.202 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:08.202 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:08.202 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:08.461 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:08.461 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:08.461 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:08.461 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:08.721 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:08.721 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:08.721 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:08.721 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:08.980 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:08.980 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:08.980 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:09.239 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:09.239 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:09.239 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:09.813 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:09.813 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:09.813 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:09.814 No valid GPT data, bailing 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:09.814 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:10.072 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:37:10.072 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:37:10.072 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:10.072 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:37:10.072 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:37:10.072 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:37:10.072 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:37:10.072 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:37:10.072 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:10.072 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:37:10.072 00:37:10.072 Discovery Log Number of Records 2, Generation counter 2 00:37:10.072 =====Discovery Log Entry 0====== 00:37:10.072 trtype: tcp 00:37:10.072 adrfam: ipv4 00:37:10.072 subtype: current discovery subsystem 00:37:10.072 treq: not specified, sq flow control disable supported 00:37:10.072 portid: 1 00:37:10.072 trsvcid: 4420 00:37:10.072 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:10.072 traddr: 10.0.0.1 00:37:10.072 eflags: none 00:37:10.072 sectype: none 00:37:10.072 =====Discovery Log Entry 1====== 00:37:10.072 trtype: tcp 00:37:10.072 adrfam: ipv4 00:37:10.072 subtype: nvme subsystem 00:37:10.072 treq: not specified, sq flow control disable supported 00:37:10.072 portid: 1 00:37:10.072 trsvcid: 4420 00:37:10.072 subnqn: nqn.2024-02.io.spdk:cnode0 00:37:10.072 traddr: 10.0.0.1 00:37:10.072 eflags: none 00:37:10.072 sectype: none 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.073 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.332 nvme0n1 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.332 nvme0n1 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.332 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.592 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.592 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:10.592 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:10.592 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.592 15:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.592 15:41:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:10.592 
15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.592 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.592 nvme0n1 00:37:10.593 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.593 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:10.593 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:10.593 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.593 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.593 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.593 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:10.593 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:10.593 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.593 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:37:10.853 nvme0n1 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:10.853 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:10.854 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:10.854 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:10.854 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:10.854 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:10.854 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.113 nvme0n1
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:37:11.113 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=:
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=:
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.114 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.374 nvme0n1
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp:
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=:
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp:
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]]
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=:
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.374 15:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.634 nvme0n1
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==:
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==:
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==:
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]]
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==:
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.634 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.894 nvme0n1
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI:
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG:
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI:
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]]
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG:
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:11.894 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.218 nvme0n1
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==:
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or:
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==:
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]]
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or:
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.218 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.493 nvme0n1
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=:
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=:
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.493 15:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.770 nvme0n1
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp:
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=:
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp:
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]]
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=:
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- #
ip=NVMF_INITIATOR_IP 00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.770 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.035 nvme0n1 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:13.035 
15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.035 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.294 nvme0n1 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:13.294 15:41:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:13.294 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.295 15:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.554 nvme0n1 00:37:13.554 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.554 15:41:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:13.554 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:13.554 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.554 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.554 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.554 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:13.554 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:13.554 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.554 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:13.813 
15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:13.813 15:41:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.813 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.073 nvme0n1 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.073 15:41:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:14.073 
15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.073 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.332 nvme0n1 00:37:14.332 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.332 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:14.332 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.332 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:14.332 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.332 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.332 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:14.332 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:14.332 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.332 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.332 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:14.333 15:41:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.333 15:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.901 nvme0n1 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:14.902 15:41:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.902 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.161 nvme0n1 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:15.161 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.162 15:41:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.162 15:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.730 nvme0n1 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.730 15:41:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:15.730 15:41:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:15.730 15:41:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.730 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.989 nvme0n1 00:37:15.989 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.989 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.989 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.989 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.989 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.989 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.248 15:41:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:16.248 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:16.249 15:41:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:16.249 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:16.249 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:16.249 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.249 15:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.508 nvme0n1 00:37:16.508 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.508 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:16.508 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.508 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:16.509 15:41:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.509 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.077 nvme0n1 00:37:17.077 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.077 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.077 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.077 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.077 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.077 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.336 15:41:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.336 15:41:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:17.336 15:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.336 15:41:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.905 nvme0n1 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:37:17.905 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.906 15:41:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.906 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.474 nvme0n1 00:37:18.474 15:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:18.474 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.475 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.043 nvme0n1 00:37:19.043 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.043 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:19.043 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:19.043 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.043 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.043 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:19.302 
15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.302 15:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.871 nvme0n1 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.871 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.131 nvme0n1 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:20.131 
15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.131 nvme0n1 
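The `for digest` / `for dhgroup` / `for keyid` markers in the trace show `auth.sh` walking a full cross-product: each (digest, dhgroup, keyid) tuple drives one `nvmet_auth_set_key` + `connect_authenticate` cycle (set options, attach `nvme0`, verify, detach). A sketch of that matrix; the exact `digests`/`dhgroups` lists are assumptions beyond what this chunk shows (the log here exercises sha256/sha384 with ffdhe2048/ffdhe8192 and key ids 0-4):

```python
from itertools import product

# Assumed test matrix; only sha256/sha384, ffdhe2048/ffdhe8192, keys 0-4
# are visible in this portion of the log.
digests = ["sha256", "sha384", "sha512"]
dhgroups = ["ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192"]
keyids = list(range(5))

# Each combo corresponds to one set_options + attach/verify/detach cycle.
combos = list(product(digests, dhgroups, keyids))
```

This explains the log's shape: the same attach/detach block repeats verbatim with only the digest, dhgroup, and `--dhchap-key keyN` arguments changing between cycles.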
00:37:20.131 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:20.391 15:41:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.391 
15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.391 nvme0n1 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.391 15:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.391 15:41:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.391 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.651 nvme0n1 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:20.651 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.911 15:41:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.911 nvme0n1 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.911 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.171 nvme0n1 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:21.171 
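The repeated `get_main_ns_ip` trace above (`ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP`, `ip_candidates["tcp"]=NVMF_INITIATOR_IP`, then `echo 10.0.0.1`) resolves the transport in use to the matching environment variable and prints its value. A minimal POSIX sketch of that lookup (the original nvmf/common.sh helper uses a bash associative array and indirect expansion; `TEST_TRANSPORT` and the variable names are taken from the trace, the `case` form is a simplification):

```shell
# Sketch of get_main_ns_ip as seen in the trace: map the transport to the
# name of the env var holding the main namespace IP, then dereference it.
get_main_ns_ip() {
    case $TEST_TRANSPORT in
        rdma) ip_var=NVMF_FIRST_TARGET_IP ;;   # RDMA uses the first target IP
        tcp)  ip_var=NVMF_INITIATOR_IP ;;      # TCP uses the initiator IP
        *)    return 1 ;;                      # unknown/unset transport
    esac
    eval "ip=\$$ip_var"                        # indirect lookup of the value
    [ -n "$ip" ] || return 1                   # mirror the [[ -z ... ]] guard
    echo "$ip"
}
```

In the log, `TEST_TRANSPORT=tcp` and `NVMF_INITIATOR_IP=10.0.0.1`, which is why every `bdev_nvme_attach_controller` call targets `-a 10.0.0.1`.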
15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.171 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.430 nvme0n1 00:37:21.431 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:37:21.431 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:21.431 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:21.431 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.431 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.431 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.431 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:21.431 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:21.431 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.431 15:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 
00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:21.431 15:41:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.431 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.690 nvme0n1 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.690 15:41:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:21.690 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.691 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.951 nvme0n1 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.951 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.209 nvme0n1 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.209 15:41:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.209 15:41:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:22.209 15:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.209 15:41:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.467 nvme0n1 00:37:22.467 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.467 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.467 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.467 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.467 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:22.468 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.726 
15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.726 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.985 nvme0n1 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.985 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.244 nvme0n1 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.244 15:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.503 nvme0n1 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.504 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.763 nvme0n1 00:37:23.763 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.763 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.763 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.763 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.763 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.763 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.763 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.763 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.763 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.763 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.022 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 
00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:24.023 15:41:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.023 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.282 nvme0n1 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:24.282 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.283 15:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.852 nvme0n1 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.852 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.110 nvme0n1 00:37:25.110 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.110 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.110 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:25.110 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.110 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.370 15:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.630 nvme0n1 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.630 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.889 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:26.148 nvme0n1 00:37:26.148 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.148 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.148 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.148 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.148 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.148 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.148 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.148 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.148 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.148 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.148 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:26.149 15:41:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.149 15:41:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.149 15:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.717 nvme0n1 00:37:26.717 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.717 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.717 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.717 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.717 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.717 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.717 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.717 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.717 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.717 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:26.977 15:41:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.977 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.546 nvme0n1 00:37:27.546 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.546 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.546 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:27.546 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.546 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.546 
15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.546 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:27.546 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.546 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.546 15:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.546 15:41:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.546 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.115 nvme0n1 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.115 15:41:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:28.115 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:28.116 15:41:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.116 15:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.684 nvme0n1 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:37:28.684 15:41:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:37:28.684 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:28.685 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:28.685 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:28.685 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:28.685 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:28.685 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:28.685 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.685 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.943 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.525 nvme0n1 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.525 
15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.525 15:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.525 nvme0n1 00:37:29.525 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.525 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.525 15:41:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.525 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.525 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.525 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.525 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.525 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.525 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.525 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:29.784 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.785 nvme0n1 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:29.785 15:41:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:29.785 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.044 nvme0n1 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.044 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.044 15:41:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:30.045 15:41:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.045 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.304 nvme0n1 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.304 15:41:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.304 15:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.564 nvme0n1 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.564 15:41:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.564 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.824 nvme0n1 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:30.824 15:41:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.824 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.084 nvme0n1 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:31.084 
15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.084 15:41:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.084 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.344 nvme0n1 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.344 15:41:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.344 15:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.604 nvme0n1 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:31.604 15:41:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.604 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.864 nvme0n1 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.864 
15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:31.864 15:41:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.864 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.124 nvme0n1 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.124 15:41:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:32.124 15:41:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.124 15:41:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.124 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.383 nvme0n1 00:37:32.383 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.383 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.383 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.383 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.383 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.383 15:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.641 15:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:37:32.641 15:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.641 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.900 nvme0n1 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:32.900 15:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.900 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.159 nvme0n1 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.159 
15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.159 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.418 nvme0n1 00:37:33.418 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.418 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.418 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.418 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.418 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:33.418 15:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:33.418 15:42:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:33.418 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:37:33.419 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.419 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:33.419 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:33.419 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:33.419 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.419 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:33.419 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.419 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.419 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.419 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.419 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:33.419 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:33.677 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:37:33.677 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.677 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.677 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:33.677 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.677 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:33.677 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:33.677 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:33.677 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:33.677 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.677 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.935 nvme0n1 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:33.936 15:42:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.936 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.503 nvme0n1 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:34.503 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:34.504 
15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.504 15:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.762 nvme0n1 00:37:34.762 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.762 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.762 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.763 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.763 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.763 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.763 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.763 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.763 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.763 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.021 15:42:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.021 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.022 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:37:35.281 nvme0n1 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:35.281 
15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.281 15:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.849 nvme0n1 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2ExMjIxNzBmYjdlZDVmMGNkMTNiYTY0MTU0Y2MwOTGn8gMp: 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: ]] 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTliNTI4NDc3NDcyYTBlZTQxNDQ0ZDA0ZDE1MWMyOTkxOTQxNTAxZTUwMDUwODI5ODZjZTczZDgyOGZiNGM5N043ryA=: 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:35.849 15:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.849 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 nvme0n1 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:36.415 15:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:36.415 15:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.415 15:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.982 nvme0n1 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.982 15:42:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:36.982 15:42:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.982 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.241 15:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.815 nvme0n1 00:37:37.815 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.815 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:37.815 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:37.815 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.815 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.815 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.815 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:37.816 15:42:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTAwY2E1YTFkYWUyNWFlMTBlNzc4YTQ0MTIxMWRjYjAzMmI2MGVmODIxMTY5OThiuqbOZg==: 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: ]] 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGMxNGU4NjJmZTMxMGQ0N2YxYmRjYTIyMDU4NTYzNGW8R6Or: 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.816 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:37:38.385 nvme0n1 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmEzZTBlODNmZGNmODA0MjE2ODk5NmU0ZTIwZTAxODAwYzMwMGRiNTNkMTY2ZjM0Zjg2N2FlZmNjN2UwZjMxZnhbvh8=: 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:38.385 
15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.385 15:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.953 nvme0n1 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:38.953 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:39.212 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:39.213 
15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.213 request: 00:37:39.213 { 00:37:39.213 "name": "nvme0", 00:37:39.213 "trtype": "tcp", 00:37:39.213 "traddr": "10.0.0.1", 00:37:39.213 "adrfam": "ipv4", 00:37:39.213 "trsvcid": "4420", 00:37:39.213 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:39.213 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:39.213 "prchk_reftag": false, 00:37:39.213 "prchk_guard": false, 00:37:39.213 "hdgst": false, 00:37:39.213 "ddgst": false, 00:37:39.213 "allow_unrecognized_csi": false, 00:37:39.213 "method": "bdev_nvme_attach_controller", 00:37:39.213 "req_id": 1 00:37:39.213 } 00:37:39.213 Got JSON-RPC error response 00:37:39.213 response: 00:37:39.213 { 00:37:39.213 "code": -5, 00:37:39.213 "message": "Input/output 
error" 00:37:39.213 } 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.213 request: 00:37:39.213 { 00:37:39.213 "name": "nvme0", 00:37:39.213 "trtype": "tcp", 00:37:39.213 "traddr": "10.0.0.1", 
00:37:39.213 "adrfam": "ipv4", 00:37:39.213 "trsvcid": "4420", 00:37:39.213 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:39.213 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:39.213 "prchk_reftag": false, 00:37:39.213 "prchk_guard": false, 00:37:39.213 "hdgst": false, 00:37:39.213 "ddgst": false, 00:37:39.213 "dhchap_key": "key2", 00:37:39.213 "allow_unrecognized_csi": false, 00:37:39.213 "method": "bdev_nvme_attach_controller", 00:37:39.213 "req_id": 1 00:37:39.213 } 00:37:39.213 Got JSON-RPC error response 00:37:39.213 response: 00:37:39.213 { 00:37:39.213 "code": -5, 00:37:39.213 "message": "Input/output error" 00:37:39.213 } 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:39.213 15:42:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:39.213 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:39.214 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:39.214 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.214 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:39.214 15:42:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.214 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:39.214 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.214 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.473 request: 00:37:39.473 { 00:37:39.473 "name": "nvme0", 00:37:39.473 "trtype": "tcp", 00:37:39.473 "traddr": "10.0.0.1", 00:37:39.473 "adrfam": "ipv4", 00:37:39.473 "trsvcid": "4420", 00:37:39.473 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:39.473 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:39.473 "prchk_reftag": false, 00:37:39.473 "prchk_guard": false, 00:37:39.473 "hdgst": false, 00:37:39.473 "ddgst": false, 00:37:39.473 "dhchap_key": "key1", 00:37:39.473 "dhchap_ctrlr_key": "ckey2", 00:37:39.473 "allow_unrecognized_csi": false, 00:37:39.473 "method": "bdev_nvme_attach_controller", 00:37:39.473 "req_id": 1 00:37:39.473 } 00:37:39.473 Got JSON-RPC error response 00:37:39.473 response: 00:37:39.473 { 00:37:39.473 "code": -5, 00:37:39.473 "message": "Input/output error" 00:37:39.473 } 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.473 15:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.473 nvme0n1 00:37:39.473 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.473 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:39.473 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:39.473 15:42:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:39.473 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:39.473 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:39.474 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:39.474 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:39.474 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:39.474 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:39.474 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:39.474 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:39.474 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:39.474 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:39.474 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.474 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:37:39.733 15:42:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.733 request: 00:37:39.733 { 00:37:39.733 "name": "nvme0", 00:37:39.733 "dhchap_key": "key1", 00:37:39.733 "dhchap_ctrlr_key": "ckey2", 00:37:39.733 "method": "bdev_nvme_set_keys", 00:37:39.733 "req_id": 1 00:37:39.733 } 00:37:39.733 Got JSON-RPC error response 00:37:39.733 response: 00:37:39.733 { 00:37:39.733 "code": -13, 00:37:39.733 "message": "Permission denied" 00:37:39.733 } 00:37:39.733 
15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:39.733 15:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:40.670 15:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:40.670 15:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:40.670 15:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.670 15:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.929 15:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.929 15:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:40.929 15:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:41.867 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjcxZjAyYjIzYzg1MDM2NWRhODQyNTI3OGQ3NGE5MDgxMjlhOWVmNjk2MjJlNTY48fUsbQ==: 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: ]] 00:37:41.868 15:42:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmE1ZTNlMzcwYjVmZDA1NDRhYmU0NDE3MjNjNjVmODhiOTQxYTVjMjY0ZmI5YTY4DURkyA==: 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.868 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.127 nvme0n1 00:37:42.127 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.127 15:42:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:42.127 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:42.127 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:42.127 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:42.127 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:42.127 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:42.127 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:42.127 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:42.127 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:42.127 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTc3ZmUxMjM4MTJkZmIzYzcwZWI1M2JhMGE1NWMyM2NgBDhI: 00:37:42.127 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: ]] 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODcyN2FkZmFmY2YwYTI0NGM0NTVkNDJjNzM3MWY5NDC6ZbyG: 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:42.128 
15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.128 request: 00:37:42.128 { 00:37:42.128 "name": "nvme0", 00:37:42.128 "dhchap_key": "key2", 00:37:42.128 "dhchap_ctrlr_key": "ckey1", 00:37:42.128 "method": "bdev_nvme_set_keys", 00:37:42.128 "req_id": 1 00:37:42.128 } 00:37:42.128 Got JSON-RPC error response 00:37:42.128 response: 00:37:42.128 { 00:37:42.128 "code": -13, 00:37:42.128 "message": "Permission denied" 00:37:42.128 } 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.128 15:42:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:37:42.128 15:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:37:43.064 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:43.064 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:43.064 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.064 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.064 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:43.323 rmmod nvme_tcp 00:37:43.323 rmmod nvme_fabrics 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 4077728 ']' 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 4077728 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 4077728 ']' 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 4077728 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4077728 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4077728' 00:37:43.323 killing process with pid 4077728 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 4077728 00:37:43.323 15:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 4077728 00:37:44.263 15:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:44.263 15:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:44.263 15:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:44.263 15:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:37:44.263 15:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:37:44.263 15:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:44.263 15:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:37:44.263 15:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:44.263 15:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:44.263 15:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:44.263 15:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:44.263 15:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:46.171 15:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:49.462 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:49.462 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:50.841 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:50.841 15:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.MaZ /tmp/spdk.key-null.WbF /tmp/spdk.key-sha256.VZM /tmp/spdk.key-sha384.hn3 /tmp/spdk.key-sha512.5Y6 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:37:50.841 15:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:53.461 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:53.461 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:37:53.461 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:37:53.461 00:37:53.461 real 0m56.089s 00:37:53.461 user 0m50.235s 00:37:53.461 sys 0m12.905s 00:37:53.461 15:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.721 ************************************ 00:37:53.721 END TEST nvmf_auth_host 00:37:53.721 ************************************ 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
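The auth test that just finished drives SPDK's DH-HMAC-CHAP flow through two JSON-RPC methods visible in the log: `bdev_nvme_attach_controller` (initial authenticated connect) and `bdev_nvme_set_keys` (re-keying a live controller). The sketch below restates that flow as plain RPC invocations; the `rpc` dry-run variable and the echoed command strings are illustrative assumptions — only the method names, the `--dhchap-key`/`--dhchap-ctrlr-key` flags, and the NQN/address values come from the log above.

```shell
# Dry-run stand-in: echo the rpc.py invocation instead of executing it.
# Against a live target this would be scripts/rpc.py from the SPDK tree.
rpc="echo rpc.py"

# Attach with host key "key1" and its matching controller key "ckey1",
# as the test does at host/auth.sh line 128 above.
attach_cmd=$($rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1)

# Rotate both keys on the live controller. A mismatched pair (e.g.
# key2/ckey1) is rejected with -13 "Permission denied", which is the
# negative case the test asserts via its NOT wrapper.
set_keys_cmd=$($rpc bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2)

echo "$attach_cmd"
echo "$set_keys_cmd"
```

The test's `NOT rpc_cmd …` idiom simply inverts the exit status, so a `-5` (I/O error, failed handshake) or `-13` (permission denied, key mismatch) response counts as a pass for the negative cases.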
00:37:53.721 15:42:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:53.721 ************************************ 00:37:53.721 START TEST nvmf_digest 00:37:53.721 ************************************ 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:53.721 * Looking for test storage... 00:37:53.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:53.721 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:53.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.721 --rc genhtml_branch_coverage=1 00:37:53.721 --rc genhtml_function_coverage=1 00:37:53.721 --rc genhtml_legend=1 00:37:53.722 --rc geninfo_all_blocks=1 00:37:53.722 --rc geninfo_unexecuted_blocks=1 00:37:53.722 00:37:53.722 ' 00:37:53.722 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.722 --rc genhtml_branch_coverage=1 00:37:53.722 --rc genhtml_function_coverage=1 00:37:53.722 --rc genhtml_legend=1 00:37:53.722 --rc geninfo_all_blocks=1 00:37:53.722 --rc geninfo_unexecuted_blocks=1 00:37:53.722 00:37:53.722 ' 00:37:53.722 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.722 --rc genhtml_branch_coverage=1 00:37:53.722 --rc genhtml_function_coverage=1 00:37:53.722 --rc genhtml_legend=1 00:37:53.722 --rc geninfo_all_blocks=1 00:37:53.722 --rc geninfo_unexecuted_blocks=1 00:37:53.722 00:37:53.722 ' 00:37:53.722 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.722 --rc genhtml_branch_coverage=1 00:37:53.722 --rc genhtml_function_coverage=1 00:37:53.722 --rc genhtml_legend=1 00:37:53.722 --rc geninfo_all_blocks=1 00:37:53.722 --rc geninfo_unexecuted_blocks=1 00:37:53.722 00:37:53.722 ' 00:37:53.722 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:53.722 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:53.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:53.982 15:42:21 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:37:53.982 15:42:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:00.551 15:42:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:00.551 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:00.552 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:00.552 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:00.552 Found net devices under 0000:86:00.0: cvl_0_0 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:00.552 Found net devices under 0000:86:00.1: cvl_0_1 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:00.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:00.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:38:00.552 00:38:00.552 --- 10.0.0.2 ping statistics --- 00:38:00.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:00.552 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:00.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:00.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:38:00.552 00:38:00.552 --- 10.0.0.1 ping statistics --- 00:38:00.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:00.552 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:00.552 ************************************ 00:38:00.552 START TEST nvmf_digest_clean 00:38:00.552 ************************************ 00:38:00.552 
15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=4092208 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 4092208 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4092208 ']' 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:00.552 15:42:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:00.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:00.552 15:42:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:00.552 [2024-11-06 15:42:27.438253] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:38:00.552 [2024-11-06 15:42:27.438338] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:00.552 [2024-11-06 15:42:27.565210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.552 [2024-11-06 15:42:27.673295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:00.552 [2024-11-06 15:42:27.673335] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:00.553 [2024-11-06 15:42:27.673346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:00.553 [2024-11-06 15:42:27.673356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:00.553 [2024-11-06 15:42:27.673364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:00.553 [2024-11-06 15:42:27.674818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.812 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:00.812 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:38:00.812 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:00.812 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:00.812 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:00.812 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:00.812 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:38:00.812 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:38:00.812 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:38:00.812 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:00.812 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:01.071 null0 00:38:01.071 [2024-11-06 15:42:28.603946] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:01.071 [2024-11-06 15:42:28.628163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4092366 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4092366 /var/tmp/bperf.sock 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4092366 ']' 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:01.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:01.071 15:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:01.071 [2024-11-06 15:42:28.705486] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:38:01.071 [2024-11-06 15:42:28.705573] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092366 ] 00:38:01.330 [2024-11-06 15:42:28.829473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.330 [2024-11-06 15:42:28.939529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:01.898 15:42:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:01.898 15:42:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:38:01.898 15:42:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:01.898 15:42:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:01.898 15:42:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:02.467 15:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:02.467 15:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:02.725 nvme0n1 00:38:02.725 15:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:02.725 15:42:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:02.984 Running I/O for 2 seconds... 00:38:04.858 20561.00 IOPS, 80.32 MiB/s [2024-11-06T14:42:32.496Z] 21278.50 IOPS, 83.12 MiB/s 00:38:04.858 Latency(us) 00:38:04.858 [2024-11-06T14:42:32.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.858 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:04.858 nvme0n1 : 2.05 20881.61 81.57 0.00 0.00 6003.95 2652.65 43690.67 00:38:04.858 [2024-11-06T14:42:32.496Z] =================================================================================================================== 00:38:04.858 [2024-11-06T14:42:32.496Z] Total : 20881.61 81.57 0.00 0.00 6003.95 2652.65 43690.67 00:38:04.858 { 00:38:04.858 "results": [ 00:38:04.858 { 00:38:04.858 "job": "nvme0n1", 00:38:04.858 "core_mask": "0x2", 00:38:04.858 "workload": "randread", 00:38:04.858 "status": "finished", 00:38:04.858 "queue_depth": 128, 00:38:04.858 "io_size": 4096, 00:38:04.858 "runtime": 2.045149, 00:38:04.858 "iops": 20881.608137108837, 00:38:04.858 "mibps": 81.5687817855814, 00:38:04.858 "io_failed": 0, 00:38:04.858 "io_timeout": 0, 00:38:04.858 "avg_latency_us": 6003.951735565204, 00:38:04.858 "min_latency_us": 2652.647619047619, 00:38:04.858 "max_latency_us": 43690.666666666664 00:38:04.858 } 00:38:04.858 ], 00:38:04.858 "core_count": 1 00:38:04.858 } 00:38:04.858 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:04.858 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:38:04.858 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:04.858 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:04.858 | select(.opcode=="crc32c") 00:38:04.858 | "\(.module_name) \(.executed)"' 00:38:04.858 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4092366 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4092366 ']' 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4092366 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4092366 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4092366' 00:38:05.119 killing process with pid 4092366 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4092366 00:38:05.119 Received shutdown signal, test time was about 2.000000 seconds 00:38:05.119 00:38:05.119 Latency(us) 00:38:05.119 [2024-11-06T14:42:32.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:05.119 [2024-11-06T14:42:32.757Z] =================================================================================================================== 00:38:05.119 [2024-11-06T14:42:32.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:05.119 15:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4092366 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4093189 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 4093189 /var/tmp/bperf.sock 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4093189 ']' 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:06.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:06.056 15:42:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:06.056 [2024-11-06 15:42:33.636424] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:38:06.056 [2024-11-06 15:42:33.636509] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4093189 ] 00:38:06.056 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:06.056 Zero copy mechanism will not be used. 
00:38:06.315 [2024-11-06 15:42:33.761140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.315 [2024-11-06 15:42:33.871168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:06.882 15:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:06.882 15:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:38:06.882 15:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:06.882 15:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:06.883 15:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:07.450 15:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:07.450 15:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:07.709 nvme0n1 00:38:07.709 15:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:07.709 15:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:07.709 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:07.709 Zero copy mechanism will not be used. 00:38:07.709 Running I/O for 2 seconds... 
00:38:10.017 5672.00 IOPS, 709.00 MiB/s [2024-11-06T14:42:37.655Z] 5814.00 IOPS, 726.75 MiB/s 00:38:10.017 Latency(us) 00:38:10.017 [2024-11-06T14:42:37.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:10.017 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:38:10.017 nvme0n1 : 2.00 5812.58 726.57 0.00 0.00 2749.92 717.78 5523.75 00:38:10.017 [2024-11-06T14:42:37.655Z] =================================================================================================================== 00:38:10.017 [2024-11-06T14:42:37.655Z] Total : 5812.58 726.57 0.00 0.00 2749.92 717.78 5523.75 00:38:10.017 { 00:38:10.017 "results": [ 00:38:10.017 { 00:38:10.017 "job": "nvme0n1", 00:38:10.017 "core_mask": "0x2", 00:38:10.017 "workload": "randread", 00:38:10.017 "status": "finished", 00:38:10.017 "queue_depth": 16, 00:38:10.017 "io_size": 131072, 00:38:10.017 "runtime": 2.003241, 00:38:10.017 "iops": 5812.580712954657, 00:38:10.017 "mibps": 726.5725891193322, 00:38:10.017 "io_failed": 0, 00:38:10.017 "io_timeout": 0, 00:38:10.017 "avg_latency_us": 2749.917819764113, 00:38:10.017 "min_latency_us": 717.7752380952381, 00:38:10.017 "max_latency_us": 5523.748571428571 00:38:10.017 } 00:38:10.017 ], 00:38:10.017 "core_count": 1 00:38:10.017 } 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:10.017 | select(.opcode=="crc32c") 00:38:10.017 | "\(.module_name) \(.executed)"' 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4093189 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4093189 ']' 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4093189 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4093189 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4093189' 00:38:10.017 killing process with pid 4093189 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4093189 00:38:10.017 Received shutdown signal, test time was about 2.000000 seconds 
00:38:10.017 00:38:10.017 Latency(us) 00:38:10.017 [2024-11-06T14:42:37.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:10.017 [2024-11-06T14:42:37.655Z] =================================================================================================================== 00:38:10.017 [2024-11-06T14:42:37.655Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:10.017 15:42:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4093189 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4093890 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4093890 /var/tmp/bperf.sock 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4093890 ']' 00:38:10.956 15:42:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:10.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:10.956 15:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:10.956 [2024-11-06 15:42:38.558474] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:38:10.956 [2024-11-06 15:42:38.558560] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4093890 ] 00:38:11.215 [2024-11-06 15:42:38.684275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.215 [2024-11-06 15:42:38.796299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:11.783 15:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:11.783 15:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:38:11.783 15:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:11.783 15:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:11.783 15:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:12.351 15:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:12.351 15:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:12.918 nvme0n1 00:38:12.918 15:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:12.918 15:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:12.918 Running I/O for 2 seconds... 
00:38:14.791 24323.00 IOPS, 95.01 MiB/s [2024-11-06T14:42:42.688Z] 24356.00 IOPS, 95.14 MiB/s 00:38:15.050 Latency(us) 00:38:15.050 [2024-11-06T14:42:42.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:15.050 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:15.050 nvme0n1 : 2.01 24367.87 95.19 0.00 0.00 5245.52 2559.02 13419.28 00:38:15.050 [2024-11-06T14:42:42.688Z] =================================================================================================================== 00:38:15.050 [2024-11-06T14:42:42.688Z] Total : 24367.87 95.19 0.00 0.00 5245.52 2559.02 13419.28 00:38:15.050 { 00:38:15.050 "results": [ 00:38:15.050 { 00:38:15.050 "job": "nvme0n1", 00:38:15.050 "core_mask": "0x2", 00:38:15.050 "workload": "randwrite", 00:38:15.050 "status": "finished", 00:38:15.050 "queue_depth": 128, 00:38:15.050 "io_size": 4096, 00:38:15.050 "runtime": 2.006905, 00:38:15.050 "iops": 24367.869929069886, 00:38:15.050 "mibps": 95.18699191042924, 00:38:15.050 "io_failed": 0, 00:38:15.050 "io_timeout": 0, 00:38:15.050 "avg_latency_us": 5245.516955746146, 00:38:15.050 "min_latency_us": 2559.024761904762, 00:38:15.050 "max_latency_us": 13419.27619047619 00:38:15.050 } 00:38:15.050 ], 00:38:15.050 "core_count": 1 00:38:15.050 } 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:38:15.050 | select(.opcode=="crc32c") 00:38:15.050 | "\(.module_name) \(.executed)"' 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4093890 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4093890 ']' 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4093890 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:15.050 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4093890 00:38:15.310 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:15.310 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:15.310 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4093890' 00:38:15.310 killing process with pid 4093890 00:38:15.310 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4093890 00:38:15.310 Received shutdown signal, test time was about 2.000000 seconds 00:38:15.310 
00:38:15.310 Latency(us) 00:38:15.310 [2024-11-06T14:42:42.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:15.310 [2024-11-06T14:42:42.948Z] =================================================================================================================== 00:38:15.310 [2024-11-06T14:42:42.948Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:15.310 15:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4093890 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4094812 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4094812 /var/tmp/bperf.sock 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 4094812 ']' 00:38:16.247 15:42:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:16.247 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:16.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:16.248 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:16.248 15:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:16.248 [2024-11-06 15:42:43.628115] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:38:16.248 [2024-11-06 15:42:43.628223] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4094812 ] 00:38:16.248 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:16.248 Zero copy mechanism will not be used. 
00:38:16.248 [2024-11-06 15:42:43.752180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.248 [2024-11-06 15:42:43.861172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:16.815 15:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:16.815 15:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:38:16.815 15:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:16.815 15:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:16.816 15:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:17.383 15:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:17.383 15:42:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:17.642 nvme0n1 00:38:17.642 15:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:17.642 15:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:17.900 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:17.900 Zero copy mechanism will not be used. 00:38:17.900 Running I/O for 2 seconds... 
00:38:19.772 5409.00 IOPS, 676.12 MiB/s [2024-11-06T14:42:47.410Z] 5916.00 IOPS, 739.50 MiB/s 00:38:19.772 Latency(us) 00:38:19.772 [2024-11-06T14:42:47.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:19.772 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:19.772 nvme0n1 : 2.00 5914.25 739.28 0.00 0.00 2700.56 2012.89 6459.98 00:38:19.772 [2024-11-06T14:42:47.410Z] =================================================================================================================== 00:38:19.772 [2024-11-06T14:42:47.410Z] Total : 5914.25 739.28 0.00 0.00 2700.56 2012.89 6459.98 00:38:19.772 { 00:38:19.772 "results": [ 00:38:19.772 { 00:38:19.772 "job": "nvme0n1", 00:38:19.772 "core_mask": "0x2", 00:38:19.772 "workload": "randwrite", 00:38:19.772 "status": "finished", 00:38:19.772 "queue_depth": 16, 00:38:19.772 "io_size": 131072, 00:38:19.772 "runtime": 2.003972, 00:38:19.772 "iops": 5914.254290978118, 00:38:19.772 "mibps": 739.2817863722647, 00:38:19.772 "io_failed": 0, 00:38:19.772 "io_timeout": 0, 00:38:19.772 "avg_latency_us": 2700.556396187905, 00:38:19.772 "min_latency_us": 2012.8914285714286, 00:38:19.772 "max_latency_us": 6459.977142857143 00:38:19.772 } 00:38:19.772 ], 00:38:19.772 "core_count": 1 00:38:19.772 } 00:38:19.772 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:19.772 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:19.772 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:19.772 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:19.772 | select(.opcode=="crc32c") 00:38:19.772 | "\(.module_name) \(.executed)"' 00:38:19.772 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4094812 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4094812 ']' 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4094812 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4094812 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4094812' 00:38:20.031 killing process with pid 4094812 00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4094812 00:38:20.031 Received shutdown signal, test time was about 2.000000 seconds 
00:38:20.031 
00:38:20.031 Latency(us)
00:38:20.031 [2024-11-06T14:42:47.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:20.031 [2024-11-06T14:42:47.669Z] ===================================================================================================================
00:38:20.031 [2024-11-06T14:42:47.669Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:20.031 15:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4094812
00:38:20.967 15:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4092208
00:38:20.967 15:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 4092208 ']'
00:38:20.967 15:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 4092208
00:38:20.967 15:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:38:20.967 15:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:38:20.967 15:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4092208
00:38:20.967 15:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:38:20.967 15:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:38:20.967 15:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4092208'
00:38:20.967 killing process with pid 4092208
00:38:20.967 15:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 4092208
00:38:20.967 15:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 4092208
00:38:22.344 
00:38:22.344 real 0m22.297s
00:38:22.344 user 0m42.039s
00:38:22.344 sys 0m4.916s
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:38:22.344 ************************************
00:38:22.344 END TEST nvmf_digest_clean
00:38:22.344 ************************************
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:38:22.344 ************************************
00:38:22.344 START TEST nvmf_digest_error
00:38:22.344 ************************************
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=4095767
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 4095767
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4095767 ']'
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:22.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:38:22.344 15:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:22.344 [2024-11-06 15:42:49.806605] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:38:22.344 [2024-11-06 15:42:49.806693] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:22.344 [2024-11-06 15:42:49.934841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:22.602 [2024-11-06 15:42:50.058727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:22.602 [2024-11-06 15:42:50.058771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:22.602 [2024-11-06 15:42:50.058783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:22.602 [2024-11-06 15:42:50.058793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:22.602 [2024-11-06 15:42:50.058802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:22.602 [2024-11-06 15:42:50.060217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:38:23.169 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:38:23.169 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:38:23.169 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:38:23.169 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable
00:38:23.169 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:23.169 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:23.169 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:38:23.170 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:23.170 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:23.170 [2024-11-06 15:42:50.642227] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:38:23.170 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:23.170 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:38:23.170 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:38:23.170 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:23.170 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:23.460 null0
00:38:23.460 [2024-11-06 15:42:50.969740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:23.460 [2024-11-06 15:42:50.993955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:23.460 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:23.460 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:38:23.460 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:23.460 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:38:23.460 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:38:23.460 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:38:23.460 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4096017
00:38:23.460 15:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4096017 /var/tmp/bperf.sock
00:38:23.460 15:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:38:23.460 15:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4096017 ']'
00:38:23.460 15:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:38:23.460 15:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:38:23.460 15:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:38:23.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:38:23.460 15:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:38:23.460 15:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:23.460 [2024-11-06 15:42:51.074425] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:38:23.460 [2024-11-06 15:42:51.074524] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096017 ]
00:38:23.768 [2024-11-06 15:42:51.200721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:23.768 [2024-11-06 15:42:51.314347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:24.336 15:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:38:24.336 15:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:38:24.336 15:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:24.336 15:42:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:24.595 15:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:24.595 15:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:24.595 15:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:24.595 15:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:24.595 15:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:24.595 15:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:24.854 nvme0n1
00:38:24.854 15:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:38:24.854 15:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:24.854 15:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:24.854 15:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:24.854 15:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:24.854 15:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:24.854 Running I/O for 2 seconds...
00:38:24.854 [2024-11-06 15:42:52.460817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:24.854 [2024-11-06 15:42:52.460869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:24.854 [2024-11-06 15:42:52.460886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:24.854 [2024-11-06 15:42:52.473644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:24.854 [2024-11-06 15:42:52.473680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:24.854 [2024-11-06 15:42:52.473694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:24.854 [2024-11-06 15:42:52.487916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:24.854 [2024-11-06 15:42:52.487948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:24.854 [2024-11-06 15:42:52.487962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.114 [2024-11-06 15:42:52.498297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.114 [2024-11-06 15:42:52.498327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.114 [2024-11-06 15:42:52.498340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.114 [2024-11-06 15:42:52.512311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.114 [2024-11-06 15:42:52.512341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.114 [2024-11-06 15:42:52.512353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.114 [2024-11-06 15:42:52.523671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.114 [2024-11-06 15:42:52.523700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.114 [2024-11-06 15:42:52.523712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.114 [2024-11-06 15:42:52.532865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.114 [2024-11-06 15:42:52.532893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.114 [2024-11-06 15:42:52.532905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.114 [2024-11-06 15:42:52.543725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.114 [2024-11-06 15:42:52.543754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.114 [2024-11-06 15:42:52.543767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.114 [2024-11-06 15:42:52.554996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.114 [2024-11-06 15:42:52.555025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.114 [2024-11-06 15:42:52.555038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.114 [2024-11-06 15:42:52.565767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.114 [2024-11-06 15:42:52.565796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.114 [2024-11-06 15:42:52.565808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.576759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.576787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.576800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.587919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.587947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.587960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.599930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.599959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.599972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.610080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.610108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.610121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.620324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.620352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.620365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.634170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.634200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.634218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.648561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.648593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.648605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.661200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.661233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.661246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.671528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.671555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.671567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.683680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.683709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.683722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.694804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.694832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.694844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.705841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.705870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.705882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.716944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.716972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.716985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.726655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.726683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.726696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.737257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.737285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.737298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.115 [2024-11-06 15:42:52.748105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.115 [2024-11-06 15:42:52.748135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.115 [2024-11-06 15:42:52.748148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.375 [2024-11-06 15:42:52.762136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.375 [2024-11-06 15:42:52.762164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.375 [2024-11-06 15:42:52.762177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.375 [2024-11-06 15:42:52.775781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.375 [2024-11-06 15:42:52.775809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.375 [2024-11-06 15:42:52.775823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.375 [2024-11-06 15:42:52.790064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.375 [2024-11-06 15:42:52.790093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.375 [2024-11-06 15:42:52.790105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.375 [2024-11-06 15:42:52.802710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.375 [2024-11-06 15:42:52.802739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.375 [2024-11-06 15:42:52.802752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.375 [2024-11-06 15:42:52.812486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.375 [2024-11-06 15:42:52.812513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.375 [2024-11-06 15:42:52.812525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.375 [2024-11-06 15:42:52.826857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.375 [2024-11-06 15:42:52.826886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.375 [2024-11-06 15:42:52.826899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.375 [2024-11-06 15:42:52.840810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.375 [2024-11-06 15:42:52.840839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.375 [2024-11-06 15:42:52.840852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.375 [2024-11-06 15:42:52.855685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.375 [2024-11-06 15:42:52.855714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.375 [2024-11-06 15:42:52.855730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.375 [2024-11-06 15:42:52.868157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.376 [2024-11-06 15:42:52.868185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.376 [2024-11-06 15:42:52.868198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.376 [2024-11-06 15:42:52.878265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.376 [2024-11-06 15:42:52.878293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.376 [2024-11-06 15:42:52.878305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.376 [2024-11-06 15:42:52.892295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.376 [2024-11-06 15:42:52.892323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.376 [2024-11-06 15:42:52.892336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.376 [2024-11-06 15:42:52.906161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.376 [2024-11-06 15:42:52.906188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.376 [2024-11-06 15:42:52.906200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.376 [2024-11-06 15:42:52.915878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.376 [2024-11-06 15:42:52.915905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.376 [2024-11-06 15:42:52.915917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.376 [2024-11-06 15:42:52.930077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.376 [2024-11-06 15:42:52.930106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.376 [2024-11-06 15:42:52.930118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.376 [2024-11-06 15:42:52.943714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.376 [2024-11-06 15:42:52.943742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.376 [2024-11-06 15:42:52.943754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.376 [2024-11-06 15:42:52.954243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.376 [2024-11-06 15:42:52.954271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.376 [2024-11-06 15:42:52.954283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.376 [2024-11-06 15:42:52.966463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.376 [2024-11-06 15:42:52.966491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.376 [2024-11-06 15:42:52.966503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.376 [2024-11-06 15:42:52.979107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.376 [2024-11-06 15:42:52.979136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.376 [2024-11-06 15:42:52.979148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.376 [2024-11-06 15:42:52.989132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.376 [2024-11-06 15:42:52.989159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.376 [2024-11-06 15:42:52.989172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.376 [2024-11-06 15:42:53.000917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.376 [2024-11-06 15:42:53.000946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.376 [2024-11-06 15:42:53.000959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.635 [2024-11-06 15:42:53.013300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.635 [2024-11-06 15:42:53.013328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.635 [2024-11-06 15:42:53.013340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.635 [2024-11-06 15:42:53.027164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.635 [2024-11-06 15:42:53.027192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.635 [2024-11-06 15:42:53.027210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.635 [2024-11-06 15:42:53.036581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.636 [2024-11-06 15:42:53.036607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.636 [2024-11-06 15:42:53.036630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.636 [2024-11-06 15:42:53.050692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.636 [2024-11-06 15:42:53.050721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.636 [2024-11-06 15:42:53.050733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.636 [2024-11-06 15:42:53.064850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.636 [2024-11-06 15:42:53.064881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.636 [2024-11-06 15:42:53.064897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.636 [2024-11-06 15:42:53.073886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.636 [2024-11-06 15:42:53.073916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.636 [2024-11-06 15:42:53.073928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.636 [2024-11-06 15:42:53.086648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.636 [2024-11-06 15:42:53.086675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.636 [2024-11-06 15:42:53.086687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:25.636 [2024-11-06 15:42:53.100809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:25.636 [2024-11-06 
15:42:53.100837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.100849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 15:42:53.115049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.115078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.115091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 15:42:53.129565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.129592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.129604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 15:42:53.142379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.142406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.142418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 15:42:53.151846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.151873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.151886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 15:42:53.163349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.163375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.163388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 15:42:53.176291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.176319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.176331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 15:42:53.187445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.187472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.187485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 
15:42:53.196966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.196993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.197006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 15:42:53.209549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.209576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.209589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 15:42:53.223880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.223907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.223919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 15:42:53.238208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.238237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.238249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 15:42:53.252616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.252646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.252659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.636 [2024-11-06 15:42:53.266929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.636 [2024-11-06 15:42:53.266958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.636 [2024-11-06 15:42:53.266971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.279655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.279685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.279702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.288616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.288644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.288657] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.303228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.303259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.303271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.317141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.317169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.317181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.329606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.329634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.329647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.340538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.340565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5786 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.340577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.351258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.351285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.351297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.361754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.361782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.361794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.371925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.371953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.371965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.382557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.382586] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.382599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.393098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.393127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.393139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.404703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.404732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.404744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.414744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.414773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.414786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.426275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.426302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.426314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.436099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.436128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.436140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.449700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.449729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.449741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 21115.00 IOPS, 82.48 MiB/s [2024-11-06T14:42:53.534Z] [2024-11-06 15:42:53.462506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.896 [2024-11-06 15:42:53.462534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.896 [2024-11-06 15:42:53.462547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:38:25.896 [2024-11-06 15:42:53.473270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.897 [2024-11-06 15:42:53.473298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.897 [2024-11-06 15:42:53.473315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.897 [2024-11-06 15:42:53.487142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.897 [2024-11-06 15:42:53.487171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.897 [2024-11-06 15:42:53.487183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.897 [2024-11-06 15:42:53.500289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.897 [2024-11-06 15:42:53.500318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.897 [2024-11-06 15:42:53.500331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.897 [2024-11-06 15:42:53.510265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.897 [2024-11-06 15:42:53.510292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.897 [2024-11-06 15:42:53.510303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:25.897 [2024-11-06 15:42:53.525261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:25.897 [2024-11-06 15:42:53.525289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.897 [2024-11-06 15:42:53.525302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.156 [2024-11-06 15:42:53.537920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.156 [2024-11-06 15:42:53.537948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.156 [2024-11-06 15:42:53.537960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.156 [2024-11-06 15:42:53.551799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.156 [2024-11-06 15:42:53.551828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.156 [2024-11-06 15:42:53.551840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.156 [2024-11-06 15:42:53.561106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.156 [2024-11-06 15:42:53.561142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23538 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:26.156 [2024-11-06 15:42:53.561155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.156 [2024-11-06 15:42:53.574507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.156 [2024-11-06 15:42:53.574536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.156 [2024-11-06 15:42:53.574548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.156 [2024-11-06 15:42:53.587542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.156 [2024-11-06 15:42:53.587571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.156 [2024-11-06 15:42:53.587583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.156 [2024-11-06 15:42:53.600214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.156 [2024-11-06 15:42:53.600243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.156 [2024-11-06 15:42:53.600255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.156 [2024-11-06 15:42:53.610154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.610183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.610196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.624858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.624885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.624897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.637126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.637154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.637167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.647088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.647117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.647130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.658901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.658928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.658942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.670530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.670559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.670571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.681584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.681613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.681629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.694367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.694395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.694407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.707158] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.707187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.707199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.716627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.716655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.716668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.728696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.728725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.728737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.739438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.739467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.739479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.748940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.748967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.748979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.760807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.760835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.760847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.771970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.771997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.772010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.157 [2024-11-06 15:42:53.782463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.157 [2024-11-06 15:42:53.782490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.157 [2024-11-06 15:42:53.782503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.794036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.794064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.794076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.804698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.804725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.804737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.817389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.817417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.817429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.827949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.827975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3507 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.827988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.839338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.839366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.839378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.851499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.851526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.851538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.863347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.863374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.863387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.873052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.873079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.873095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.884473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.884501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.884514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.894369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.894396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.894408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.905565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.905593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.905605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.915554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.915582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.915594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.926877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.926905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.926919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.937663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.937690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.937702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.949194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.949230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.949242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.958232] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.958259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.958271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.970113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.970140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.970153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.980673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.980700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.980712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:53.990469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:53.990496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:53.990509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:54.002619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:54.002647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:54.002659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:54.014296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:54.014323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:54.014336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:54.024516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:54.024545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:54.024557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:54.035559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.417 [2024-11-06 15:42:54.035587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.417 [2024-11-06 15:42:54.035600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.417 [2024-11-06 15:42:54.046738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.418 [2024-11-06 15:42:54.046765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.418 [2024-11-06 15:42:54.046777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.677 [2024-11-06 15:42:54.058913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.677 [2024-11-06 15:42:54.058940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.677 [2024-11-06 15:42:54.058958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.677 [2024-11-06 15:42:54.069333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.677 [2024-11-06 15:42:54.069362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.677 [2024-11-06 15:42:54.069375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.677 [2024-11-06 15:42:54.083343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.677 [2024-11-06 15:42:54.083373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7415 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.677 [2024-11-06 15:42:54.083385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.677 [2024-11-06 15:42:54.093482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.677 [2024-11-06 15:42:54.093511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.677 [2024-11-06 15:42:54.093523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.677 [2024-11-06 15:42:54.103326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.677 [2024-11-06 15:42:54.103354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.677 [2024-11-06 15:42:54.103365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.677 [2024-11-06 15:42:54.118879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.677 [2024-11-06 15:42:54.118907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.677 [2024-11-06 15:42:54.118920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.677 [2024-11-06 15:42:54.132990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.677 [2024-11-06 15:42:54.133018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.677 [2024-11-06 15:42:54.133031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.677 [2024-11-06 15:42:54.147111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.677 [2024-11-06 15:42:54.147138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.677 [2024-11-06 15:42:54.147150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.677 [2024-11-06 15:42:54.161075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.677 [2024-11-06 15:42:54.161102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.677 [2024-11-06 15:42:54.161114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.677 [2024-11-06 15:42:54.175392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.677 [2024-11-06 15:42:54.175419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.677 [2024-11-06 15:42:54.175431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.677 [2024-11-06 15:42:54.189709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032e680) 00:38:26.678 [2024-11-06 15:42:54.189737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.678 [2024-11-06 15:42:54.189750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.678 [2024-11-06 15:42:54.199126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.678 [2024-11-06 15:42:54.199153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.678 [2024-11-06 15:42:54.199165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.678 [2024-11-06 15:42:54.212627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.678 [2024-11-06 15:42:54.212653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.678 [2024-11-06 15:42:54.212665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.678 [2024-11-06 15:42:54.224702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.678 [2024-11-06 15:42:54.224729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.678 [2024-11-06 15:42:54.224741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.678 [2024-11-06 15:42:54.236796] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.678 [2024-11-06 15:42:54.236824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.678 [2024-11-06 15:42:54.236838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.678 [2024-11-06 15:42:54.248941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.678 [2024-11-06 15:42:54.248969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.678 [2024-11-06 15:42:54.248981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.678 [2024-11-06 15:42:54.258598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.678 [2024-11-06 15:42:54.258624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.678 [2024-11-06 15:42:54.258636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.678 [2024-11-06 15:42:54.272975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.678 [2024-11-06 15:42:54.273004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.678 [2024-11-06 15:42:54.273020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.678 [2024-11-06 15:42:54.282652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.678 [2024-11-06 15:42:54.282679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.678 [2024-11-06 15:42:54.282691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.678 [2024-11-06 15:42:54.295400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.678 [2024-11-06 15:42:54.295428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.678 [2024-11-06 15:42:54.295440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.678 [2024-11-06 15:42:54.309103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.678 [2024-11-06 15:42:54.309130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.678 [2024-11-06 15:42:54.309143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.937 [2024-11-06 15:42:54.319258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.937 [2024-11-06 15:42:54.319285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.937 [2024-11-06 15:42:54.319298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.937 [2024-11-06 15:42:54.333995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.937 [2024-11-06 15:42:54.334022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.937 [2024-11-06 15:42:54.334034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.937 [2024-11-06 15:42:54.343528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.937 [2024-11-06 15:42:54.343555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.937 [2024-11-06 15:42:54.343568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.937 [2024-11-06 15:42:54.357149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.937 [2024-11-06 15:42:54.357177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.937 [2024-11-06 15:42:54.357190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.937 [2024-11-06 15:42:54.370484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.937 [2024-11-06 15:42:54.370511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5934 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:26.937 [2024-11-06 15:42:54.370524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.937 [2024-11-06 15:42:54.380515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.937 [2024-11-06 15:42:54.380549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.937 [2024-11-06 15:42:54.380561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.937 [2024-11-06 15:42:54.393117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.937 [2024-11-06 15:42:54.393142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.937 [2024-11-06 15:42:54.393155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.937 [2024-11-06 15:42:54.402346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.937 [2024-11-06 15:42:54.402372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.937 [2024-11-06 15:42:54.402384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.937 [2024-11-06 15:42:54.415969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.938 [2024-11-06 15:42:54.415997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.938 [2024-11-06 15:42:54.416010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.938 [2024-11-06 15:42:54.427615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.938 [2024-11-06 15:42:54.427643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.938 [2024-11-06 15:42:54.427655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.938 [2024-11-06 15:42:54.437824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.938 [2024-11-06 15:42:54.437852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.938 [2024-11-06 15:42:54.437864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.938 [2024-11-06 15:42:54.449092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:26.938 [2024-11-06 15:42:54.449119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.938 [2024-11-06 15:42:54.449131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:26.938 21432.50 IOPS, 83.72 MiB/s 00:38:26.938 Latency(us) 00:38:26.938 [2024-11-06T14:42:54.576Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:26.938 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:26.938 nvme0n1 : 2.00 21455.08 83.81 0.00 0.00 5960.03 3120.76 21096.35 00:38:26.938 [2024-11-06T14:42:54.576Z] =================================================================================================================== 00:38:26.938 [2024-11-06T14:42:54.576Z] Total : 21455.08 83.81 0.00 0.00 5960.03 3120.76 21096.35 00:38:26.938 { 00:38:26.938 "results": [ 00:38:26.938 { 00:38:26.938 "job": "nvme0n1", 00:38:26.938 "core_mask": "0x2", 00:38:26.938 "workload": "randread", 00:38:26.938 "status": "finished", 00:38:26.938 "queue_depth": 128, 00:38:26.938 "io_size": 4096, 00:38:26.938 "runtime": 2.003861, 00:38:26.938 "iops": 21455.08096619476, 00:38:26.938 "mibps": 83.80891002419828, 00:38:26.938 "io_failed": 0, 00:38:26.938 "io_timeout": 0, 00:38:26.938 "avg_latency_us": 5960.033161721786, 00:38:26.938 "min_latency_us": 3120.7619047619046, 00:38:26.938 "max_latency_us": 21096.350476190477 00:38:26.938 } 00:38:26.938 ], 00:38:26.938 "core_count": 1 00:38:26.938 } 00:38:26.938 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:26.938 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:26.938 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:26.938 | .driver_specific 00:38:26.938 | .nvme_error 00:38:26.938 | .status_code 00:38:26.938 | .command_transient_transport_error' 00:38:26.938 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:27.197 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 168 > 0 )) 00:38:27.197 15:42:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4096017 00:38:27.197 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4096017 ']' 00:38:27.197 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4096017 00:38:27.197 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:38:27.197 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:27.197 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4096017 00:38:27.197 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:27.197 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:27.197 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4096017' 00:38:27.197 killing process with pid 4096017 00:38:27.197 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4096017 00:38:27.197 Received shutdown signal, test time was about 2.000000 seconds 00:38:27.197 00:38:27.197 Latency(us) 00:38:27.197 [2024-11-06T14:42:54.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.197 [2024-11-06T14:42:54.835Z] =================================================================================================================== 00:38:27.197 [2024-11-06T14:42:54.835Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:27.197 15:42:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4096017 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 
-- # run_bperf_err randread 131072 16 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4096721 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4096721 /var/tmp/bperf.sock 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4096721 ']' 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:28.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
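bdevperf is launched here with `-z` (wait for RPCs) and the harness then blocks in `waitforlisten` until the app is accepting connections on `/var/tmp/bperf.sock` before issuing any `rpc.py` calls. A minimal sketch of that wait step, assuming a simple connect-poll loop; the function name and retry parameters are hypothetical, not SPDK's actual helper:

```python
import socket
import time


def wait_for_unix_listener(path: str, timeout: float = 5.0,
                           interval: float = 0.05) -> bool:
    """Poll a UNIX domain socket path until a server accepts connections.

    Hypothetical stand-in for the waitforlisten step above: RPCs must not
    be sent on the socket until the app has bound and called listen().
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True  # server is up and accepting connections
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(interval)  # socket not bound/listening yet; retry
        finally:
            s.close()
    return False
```

Polling with connect attempts (rather than just checking the path exists) matters: the socket file can exist before the server has called `listen()`.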
00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:28.135 15:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:28.135 [2024-11-06 15:42:55.655474] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:38:28.135 [2024-11-06 15:42:55.655558] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096721 ] 00:38:28.135 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:28.135 Zero copy mechanism will not be used. 00:38:28.394 [2024-11-06 15:42:55.779083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.394 [2024-11-06 15:42:55.890661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:28.963 15:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:28.963 15:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:38:28.963 15:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:28.963 15:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:29.222 15:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:29.222 15:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.222 15:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:38:29.222 15:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.222 15:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:29.222 15:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:29.481 nvme0n1 00:38:29.481 15:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:38:29.481 15:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.481 15:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:29.481 15:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.481 15:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:29.481 15:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:29.740 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:29.740 Zero copy mechanism will not be used. 00:38:29.741 Running I/O for 2 seconds... 
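The errors that follow are NVMe/TCP data digest failures: the host computes CRC-32C over each received data PDU and compares it against the DDGST field, and the `accel_error_inject_error -o crc32c -t corrupt -i 32` call above corrupts the host-side CRC computation so the compare fails and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal bit-by-bit CRC-32C (Castagnoli) sketch for illustration only; SPDK computes digests through its accel framework, not like this:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli): reflected, poly 0x82F63B78,
    init and final XOR of 0xFFFFFFFF. Illustration of the digest
    carried in the NVMe/TCP DDGST field."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF


payload = b"example data PDU payload"  # hypothetical PDU contents
good = crc32c(payload)
corrupted = good ^ 0x1  # model the injected corruption of the computed CRC
# The receiver's recomputed digest no longer matches, which is the
# mismatch each "data digest error" record in this log reports.
assert corrupted != crc32c(payload)
```

One corrupted digest per interval (`-i 32`) is enough to fail a whole command, which is why the run above completes with nonzero `io_failed` counts despite the target returning valid data.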
00:38:29.741 [2024-11-06 15:42:57.152866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.152916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.152935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.159779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.159813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.159828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.167616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.167649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.167662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.174622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.174653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.174667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.182294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.182322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.182335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.191191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.191226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.191240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.198115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.198144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.198157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.204262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.204291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 
[2024-11-06 15:42:57.204304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.209705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.209732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.209744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.215107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.215136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.215148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.220591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.220619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.220630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.226023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.226051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.226063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.231443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.231470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.231482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.236826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.236853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.236865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.242442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.242469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.242481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.247884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 
15:42:57.247910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.247921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.253399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.253426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.253439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.258981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.259010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.259027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.264574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.264603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.264615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.270298] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.270326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.270338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.275824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.275852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.275864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.281489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.281518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.281529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.287115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.287143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.287155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.292748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.292776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.292787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.298271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.298298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.298310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.741 [2024-11-06 15:42:57.303989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.741 [2024-11-06 15:42:57.304017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.741 [2024-11-06 15:42:57.304029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.742 [2024-11-06 15:42:57.309618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.742 [2024-11-06 15:42:57.309646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.742 [2024-11-06 15:42:57.309658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.742 [2024-11-06 15:42:57.315290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.742 [2024-11-06 15:42:57.315318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.742 [2024-11-06 15:42:57.315330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.742 [2024-11-06 15:42:57.320950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.742 [2024-11-06 15:42:57.320978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.742 [2024-11-06 15:42:57.320990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.742 [2024-11-06 15:42:57.326533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.742 [2024-11-06 15:42:57.326561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.742 [2024-11-06 15:42:57.326573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.742 [2024-11-06 15:42:57.332293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.742 [2024-11-06 15:42:57.332321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:29.742 [2024-11-06 15:42:57.332333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.742 [2024-11-06 15:42:57.338076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.742 [2024-11-06 15:42:57.338104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.742 [2024-11-06 15:42:57.338116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.742 [2024-11-06 15:42:57.343858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.742 [2024-11-06 15:42:57.343886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.742 [2024-11-06 15:42:57.343897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.742 [2024-11-06 15:42:57.349133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.742 [2024-11-06 15:42:57.349162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.742 [2024-11-06 15:42:57.349174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:29.742 [2024-11-06 15:42:57.354377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.742 [2024-11-06 15:42:57.354405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.742 [2024-11-06 15:42:57.354422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:29.742 [2024-11-06 15:42:57.359976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.742 [2024-11-06 15:42:57.360004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.742 [2024-11-06 15:42:57.360016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.742 [2024-11-06 15:42:57.365678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.742 [2024-11-06 15:42:57.365705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.742 [2024-11-06 15:42:57.365718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:29.742 [2024-11-06 15:42:57.371334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:29.742 [2024-11-06 15:42:57.371362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.742 [2024-11-06 15:42:57.371374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.002 [2024-11-06 15:42:57.377113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032e680) 00:38:30.002 [2024-11-06 15:42:57.377141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.002 [2024-11-06 15:42:57.377154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.002 [2024-11-06 15:42:57.383077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.002 [2024-11-06 15:42:57.383106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.002 [2024-11-06 15:42:57.383118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.002 [2024-11-06 15:42:57.388994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.002 [2024-11-06 15:42:57.389021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.002 [2024-11-06 15:42:57.389033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.002 [2024-11-06 15:42:57.394683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.002 [2024-11-06 15:42:57.394711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.002 [2024-11-06 15:42:57.394722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.002 [2024-11-06 15:42:57.400429] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.002 [2024-11-06 15:42:57.400456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.002 [2024-11-06 15:42:57.400468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.002 [2024-11-06 15:42:57.406224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.002 [2024-11-06 15:42:57.406257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.002 [2024-11-06 15:42:57.406268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.002 [2024-11-06 15:42:57.412037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.002 [2024-11-06 15:42:57.412065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.002 [2024-11-06 15:42:57.412077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.002 [2024-11-06 15:42:57.417703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.002 [2024-11-06 15:42:57.417732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.002 [2024-11-06 15:42:57.417744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.002 [2024-11-06 15:42:57.423544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.423572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.423585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.428923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.428953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.428975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.435126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.435154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.435166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.440907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.440935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.440946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.444685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.444712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.444724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.448929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.448958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.448975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.454428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.454456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.454467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.459956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.459985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.459997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.465686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.465713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.465725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.471321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.471348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.471360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.477026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.477053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.477065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.482551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.482581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.482594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.488725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.488753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.488764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.494965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.494993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.495005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.500701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.500733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.500745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.506327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.506355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.506368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.512119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.512147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.512158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.517742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.517770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.517782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.523592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.523620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.523632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.528852] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.528879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.528892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.534826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.534854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.534867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.540707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.540736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.540748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.546612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.546640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.546653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.552473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.552501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.552514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.558080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.558108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.558120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.563758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.563786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.563798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.003 [2024-11-06 15:42:57.569551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.003 [2024-11-06 15:42:57.569578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.003 [2024-11-06 15:42:57.569590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.004 [2024-11-06 15:42:57.575450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.004 [2024-11-06 15:42:57.575479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.004 [2024-11-06 15:42:57.575491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.004 [2024-11-06 15:42:57.581177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.004 [2024-11-06 15:42:57.581212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.004 [2024-11-06 15:42:57.581225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.004 [2024-11-06 15:42:57.587296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.004 [2024-11-06 15:42:57.587324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.004 [2024-11-06 15:42:57.587336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.004 [2024-11-06 15:42:57.592929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.004 [2024-11-06 15:42:57.592957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:30.004 [2024-11-06 15:42:57.592968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.004 [2024-11-06 15:42:57.598620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.004 [2024-11-06 15:42:57.598653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.004 [2024-11-06 15:42:57.598665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.004 [2024-11-06 15:42:57.604528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.004 [2024-11-06 15:42:57.604558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.004 [2024-11-06 15:42:57.604570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.004 [2024-11-06 15:42:57.610327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.004 [2024-11-06 15:42:57.610355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.004 [2024-11-06 15:42:57.610367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.004 [2024-11-06 15:42:57.616220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.004 [2024-11-06 15:42:57.616249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.004 [2024-11-06 15:42:57.616261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.004 [2024-11-06 15:42:57.622383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.004 [2024-11-06 15:42:57.622412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.004 [2024-11-06 15:42:57.622424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.004 [2024-11-06 15:42:57.628108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.004 [2024-11-06 15:42:57.628138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.004 [2024-11-06 15:42:57.628150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.004 [2024-11-06 15:42:57.634431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.004 [2024-11-06 15:42:57.634460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.004 [2024-11-06 15:42:57.634472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.264 [2024-11-06 15:42:57.640176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032e680) 00:38:30.264 [2024-11-06 15:42:57.640210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.264 [2024-11-06 15:42:57.640222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.264 [2024-11-06 15:42:57.646149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.264 [2024-11-06 15:42:57.646178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.264 [2024-11-06 15:42:57.646192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.264 [2024-11-06 15:42:57.652040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.264 [2024-11-06 15:42:57.652070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.264 [2024-11-06 15:42:57.652082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.264 [2024-11-06 15:42:57.658226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.264 [2024-11-06 15:42:57.658256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.264 [2024-11-06 15:42:57.658268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.264 [2024-11-06 15:42:57.664131] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.264 [2024-11-06 15:42:57.664160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.264 [2024-11-06 15:42:57.664173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.264 [2024-11-06 15:42:57.670259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.264 [2024-11-06 15:42:57.670288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.264 [2024-11-06 15:42:57.670300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.264 [2024-11-06 15:42:57.677184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.264 [2024-11-06 15:42:57.677243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.264 [2024-11-06 15:42:57.677255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.264 [2024-11-06 15:42:57.684264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.264 [2024-11-06 15:42:57.684295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.264 [2024-11-06 15:42:57.684308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.264 [2024-11-06 15:42:57.690525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.264 [2024-11-06 15:42:57.690554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.690566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.696584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.696612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.696624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.702539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.702572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.702585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.710233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.710262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.710275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.716494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.716523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.716536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.722347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.722375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.722387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.728136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.728165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.728177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.733904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.733932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.733943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.739579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.739607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.739619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.745310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.745338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.745350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.751060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.751088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.751100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.756938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.756967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.756979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.762928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.762956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.762969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.768774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.768802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.768815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.775054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.775083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.775095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.781332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.781361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.781373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.787565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.787594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.787606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.793467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.793496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.793508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.799442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.265 [2024-11-06 15:42:57.799471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.265 [2024-11-06 15:42:57.799483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.265 [2024-11-06 15:42:57.805273] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.265 [2024-11-06 15:42:57.805302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.265 [2024-11-06 15:42:57.805321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.265 [2024-11-06 15:42:57.811149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.265 [2024-11-06 15:42:57.811178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.265 [2024-11-06 15:42:57.811190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.265 [2024-11-06 15:42:57.816941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.265 [2024-11-06 15:42:57.816972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.265 [2024-11-06 15:42:57.816984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.265 [2024-11-06 15:42:57.822775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.265 [2024-11-06 15:42:57.822805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.265 [2024-11-06 15:42:57.822817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.265 [2024-11-06 15:42:57.828596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.265 [2024-11-06 15:42:57.828625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.265 [2024-11-06 15:42:57.828637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.265 [2024-11-06 15:42:57.834131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.265 [2024-11-06 15:42:57.834160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.265 [2024-11-06 15:42:57.834173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.265 [2024-11-06 15:42:57.837180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.265 [2024-11-06 15:42:57.837217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.265 [2024-11-06 15:42:57.837230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.265 [2024-11-06 15:42:57.842815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.265 [2024-11-06 15:42:57.842842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.265 [2024-11-06 15:42:57.842853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.265 [2024-11-06 15:42:57.848658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.265 [2024-11-06 15:42:57.848686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.266 [2024-11-06 15:42:57.848697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.266 [2024-11-06 15:42:57.854726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.266 [2024-11-06 15:42:57.854755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.266 [2024-11-06 15:42:57.854766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.266 [2024-11-06 15:42:57.860849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.266 [2024-11-06 15:42:57.860878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.266 [2024-11-06 15:42:57.860890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.266 [2024-11-06 15:42:57.866674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.266 [2024-11-06 15:42:57.866701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.266 [2024-11-06 15:42:57.866713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.266 [2024-11-06 15:42:57.872572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.266 [2024-11-06 15:42:57.872601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.266 [2024-11-06 15:42:57.872612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.266 [2024-11-06 15:42:57.878372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.266 [2024-11-06 15:42:57.878401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.266 [2024-11-06 15:42:57.878413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.266 [2024-11-06 15:42:57.884161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.266 [2024-11-06 15:42:57.884191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.266 [2024-11-06 15:42:57.884210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.266 [2024-11-06 15:42:57.888831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.266 [2024-11-06 15:42:57.888859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.266 [2024-11-06 15:42:57.888871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.266 [2024-11-06 15:42:57.892377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.266 [2024-11-06 15:42:57.892403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.266 [2024-11-06 15:42:57.892415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.266 [2024-11-06 15:42:57.897596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.266 [2024-11-06 15:42:57.897625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.266 [2024-11-06 15:42:57.897642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.526 [2024-11-06 15:42:57.903110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.526 [2024-11-06 15:42:57.903138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.526 [2024-11-06 15:42:57.903150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.526 [2024-11-06 15:42:57.908660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.526 [2024-11-06 15:42:57.908688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.526 [2024-11-06 15:42:57.908700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.526 [2024-11-06 15:42:57.914048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.526 [2024-11-06 15:42:57.914077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.526 [2024-11-06 15:42:57.914089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.526 [2024-11-06 15:42:57.919865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.526 [2024-11-06 15:42:57.919897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.526 [2024-11-06 15:42:57.919910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.526 [2024-11-06 15:42:57.925723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.526 [2024-11-06 15:42:57.925752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.526 [2024-11-06 15:42:57.925764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.526 [2024-11-06 15:42:57.931432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.526 [2024-11-06 15:42:57.931472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.526 [2024-11-06 15:42:57.931486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.526 [2024-11-06 15:42:57.937208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.526 [2024-11-06 15:42:57.937236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.526 [2024-11-06 15:42:57.937248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.526 [2024-11-06 15:42:57.942906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.526 [2024-11-06 15:42:57.942934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.526 [2024-11-06 15:42:57.942946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.526 [2024-11-06 15:42:57.949053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.526 [2024-11-06 15:42:57.949081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.526 [2024-11-06 15:42:57.949095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.526 [2024-11-06 15:42:57.954587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.526 [2024-11-06 15:42:57.954617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.526 [2024-11-06 15:42:57.954629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:57.960080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:57.960108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:57.960122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:57.965585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:57.965612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:57.965624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:57.971284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:57.971312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:57.971324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:57.976908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:57.976936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:57.976947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:57.982540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:57.982569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:57.982580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:57.988067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:57.988096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:57.988108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:57.994182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:57.994218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:57.994234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:57.999954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:57.999982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:57.999994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.005773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.005802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.005814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.011515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.011544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.011555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.017290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.017320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.017332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.023116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.023145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.023157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.028800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.028828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.028840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.034560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.034589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.034601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.040333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.040363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.040374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.046004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.046033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.046044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.051582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.051610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.051622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.057230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.057258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.057270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.062835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.062865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.062877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.068560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.068587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.068599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.074244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.074273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.074285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.079959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.079988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.079999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.085638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.085665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.085676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.091325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.091353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.091369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.096943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.096971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.096983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.102442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.102470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.527 [2024-11-06 15:42:58.102482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.527 [2024-11-06 15:42:58.108032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.527 [2024-11-06 15:42:58.108060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.528 [2024-11-06 15:42:58.108072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.528 [2024-11-06 15:42:58.111875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.528 [2024-11-06 15:42:58.111903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.528 [2024-11-06 15:42:58.111915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.528 [2024-11-06 15:42:58.116043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.528 [2024-11-06 15:42:58.116072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.528 [2024-11-06 15:42:58.116084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.528 [2024-11-06 15:42:58.121358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.528 [2024-11-06 15:42:58.121386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.528 [2024-11-06 15:42:58.121398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.528 [2024-11-06 15:42:58.126822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.528 [2024-11-06 15:42:58.126850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.528 [2024-11-06 15:42:58.126862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.528 [2024-11-06 15:42:58.132413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.528 [2024-11-06 15:42:58.132441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.528 [2024-11-06 15:42:58.132453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.528 [2024-11-06 15:42:58.138170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.528 [2024-11-06 15:42:58.138209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.528 [2024-11-06 15:42:58.138222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.528 5339.00 IOPS, 667.38 MiB/s [2024-11-06T14:42:58.166Z] [2024-11-06 15:42:58.145183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.528 [2024-11-06 15:42:58.145217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.528 [2024-11-06 15:42:58.145229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.528 [2024-11-06 15:42:58.150978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.528 [2024-11-06 15:42:58.151007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.528 [2024-11-06 15:42:58.151019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.528 [2024-11-06 15:42:58.156795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.528 [2024-11-06 15:42:58.156825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.528 [2024-11-06 15:42:58.156837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.162438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.162467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.162487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.168495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.168524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.168536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.174262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.174292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.174304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.180115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.180145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.180156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.185820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.185850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.185866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.191561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.191590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.191601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.197260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.197288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.197300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.202932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.202961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.202973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.208641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.208669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.208681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.214406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.214435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.214447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.220183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.220217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.220230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.225947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.225974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.225985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:30.788 [2024-11-06 15:42:58.231771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:30.788 [2024-11-06 15:42:58.231800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:30.788 [2024-11-06 15:42:58.231812] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.788 [2024-11-06 15:42:58.237710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.237743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.237755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.243458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.243486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.243497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.249569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.249598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.249611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.255241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.255269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.255281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.261435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.261464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.261476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.267197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.267230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.267242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.272896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.272924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.272936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.278554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.278583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.278595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.284367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.284394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.284410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.289711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.289740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.289752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.295742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.295772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.295784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.301442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.301472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.301484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.307002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.307029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.307041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.312788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.312816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.312828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.318533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.318561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.318573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.324171] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.324199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.324218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.329964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.329992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.330004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.335699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.335731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.335744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.341339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.341366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.341378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.346584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.346613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.346625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.352128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.352157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.352169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.357652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.357680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.357692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.363216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.363260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.363272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.368758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.368787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.368798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.374251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.374279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.374291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.379727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.379755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.379771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.385252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.385281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.385293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.390810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.390838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.390849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.396312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.396340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.396353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.401652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.401680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.401692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.404696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.404754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.404767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.409997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.410026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.410038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.415364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.415390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.415402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.789 [2024-11-06 15:42:58.420871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:30.789 [2024-11-06 15:42:58.420899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.789 [2024-11-06 15:42:58.420912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.426274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.426305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.426317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.431812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.431839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.431851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.437410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.437437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.437449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.442990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.443018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.443029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.448454] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.448482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.448493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.453995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.454022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.454034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.459485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.459512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.459524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.465039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.465067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.465078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.470278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.470305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.470317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.475728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.475756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.475767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.481166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.481193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.481211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.487459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.487488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.487500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.495038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.495066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.495078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.502394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.502423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.502436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.510144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.510173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.510186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.516860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.516889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.516901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.524244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.524273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.524285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.531766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.531799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.531811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.539762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.539790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.539803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.547737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.547765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.547778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.555328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.555356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.555369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.562970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.562999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.563011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.570685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.050 [2024-11-06 15:42:58.570713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.050 [2024-11-06 15:42:58.570726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.050 [2024-11-06 15:42:58.578269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.578298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.578310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.586088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.586118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.586131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.594354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.594383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.594396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.601999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.602029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.602041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.609657] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.609688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.609700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.617735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.617764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.617777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.624020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.624049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.624061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.629631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.629658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.629670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.635199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.635234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.635246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.640681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.640708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.640721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.646118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.646145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.646158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.651504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.651541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.651553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.656889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.656917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.656929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.662366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.662394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.662406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.667790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.667818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.667831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.673261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.673288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6816 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.673300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.678777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.678806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.678818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.051 [2024-11-06 15:42:58.684165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.051 [2024-11-06 15:42:58.684193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.051 [2024-11-06 15:42:58.684211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.311 [2024-11-06 15:42:58.689652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.311 [2024-11-06 15:42:58.689680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.311 [2024-11-06 15:42:58.689692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.311 [2024-11-06 15:42:58.695252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.311 [2024-11-06 15:42:58.695280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.311 [2024-11-06 15:42:58.695292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.311 [2024-11-06 15:42:58.700738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.311 [2024-11-06 15:42:58.700764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.700776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.706227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.706254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.706265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.711720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.711747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.711759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.717164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.717191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.717209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.722580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.722607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.722619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.728085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.728113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.728124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.733545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.733571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.733583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.738961] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.738988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.738999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.744419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.744451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.744463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.749938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.749965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.749976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.755410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.755438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.755450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.760879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.760905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.760916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.766310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.766338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.766350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.771695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.771722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.771734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.777126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.777153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.777165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.782502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.782528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.782540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.787896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.787923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.787935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.793612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.793637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.793649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.799021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.799047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.799059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.804433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.804460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.804471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.809810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.809837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.809848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.815347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.815374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.815386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.820807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.820834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.820846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.826253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.826280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.826291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.831667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.831695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.831707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.837138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.837165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.837181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.842620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500032e680) 00:38:31.312 [2024-11-06 15:42:58.842647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.312 [2024-11-06 15:42:58.842658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.312 [2024-11-06 15:42:58.848063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.848089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.848100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.853533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.853560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.853571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.858992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.859018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.859030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.864530] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.864557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.864569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.870043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.870069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.870081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.875516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.875543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.875555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.881423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.881451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.881463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.887327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.887354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.887366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.892789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.892816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.892828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.898251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.898277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.898288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.903332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.903360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.903372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.908672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.908699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.908719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.914096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.914124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.914136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.919476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.919503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.313 [2024-11-06 15:42:58.919515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.313 [2024-11-06 15:42:58.925008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680) 00:38:31.313 [2024-11-06 15:42:58.925035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:38:31.313 [2024-11-06 15:42:58.925048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:31.313 [2024-11-06 15:42:58.930530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.313 [2024-11-06 15:42:58.930558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.313 [2024-11-06 15:42:58.930574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:31.313 [2024-11-06 15:42:58.936137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.313 [2024-11-06 15:42:58.936165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.313 [2024-11-06 15:42:58.936177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:31.313 [2024-11-06 15:42:58.941696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.313 [2024-11-06 15:42:58.941725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.313 [2024-11-06 15:42:58.941736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:31.573 [2024-11-06 15:42:58.947189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.573 [2024-11-06 15:42:58.947225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.573 [2024-11-06 15:42:58.947237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:31.573 [2024-11-06 15:42:58.952715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.573 [2024-11-06 15:42:58.952742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.573 [2024-11-06 15:42:58.952754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:31.573 [2024-11-06 15:42:58.958172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.573 [2024-11-06 15:42:58.958199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.573 [2024-11-06 15:42:58.958218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:31.573 [2024-11-06 15:42:58.963563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.573 [2024-11-06 15:42:58.963600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.573 [2024-11-06 15:42:58.963611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:31.573 [2024-11-06 15:42:58.968964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.573 [2024-11-06 15:42:58.968991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.573 [2024-11-06 15:42:58.969003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:31.573 [2024-11-06 15:42:58.975138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.573 [2024-11-06 15:42:58.975165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.573 [2024-11-06 15:42:58.975178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:31.573 [2024-11-06 15:42:58.980859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.573 [2024-11-06 15:42:58.980886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.573 [2024-11-06 15:42:58.980898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:31.573 [2024-11-06 15:42:58.986376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.573 [2024-11-06 15:42:58.986404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.573 [2024-11-06 15:42:58.986415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:31.573 [2024-11-06 15:42:58.991919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.573 [2024-11-06 15:42:58.991947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.573 [2024-11-06 15:42:58.991959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:31.573 [2024-11-06 15:42:58.997356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.573 [2024-11-06 15:42:58.997383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.573 [2024-11-06 15:42:58.997395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:31.573 [2024-11-06 15:42:59.002965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.573 [2024-11-06 15:42:59.002993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.573 [2024-11-06 15:42:59.003005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:31.573 [2024-11-06 15:42:59.008368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.573 [2024-11-06 15:42:59.008395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.008408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.013862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.013889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.013902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.019328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.019356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.019368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.024742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.024772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.024789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.030196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.030232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.030245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.035697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.035726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.035739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.041157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.041185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.041198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.046646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.046674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.046686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.052103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.052131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.052143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.057444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.057471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.057483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.062865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.062892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.062904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.068443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.068471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.068483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.074013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.074042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.074054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.079590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.079617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.079629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.085761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.085789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.085801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.091429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.091457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.091471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.096930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.096959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.096971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.102426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.102456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.102469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.107901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.107929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.107941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.113436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.113474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.113487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.118956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.118983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.119000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.124453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.124482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.124494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.129888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.129916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.129929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.135370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.135399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.135411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:31.574 [2024-11-06 15:42:59.140807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500032e680)
00:38:31.574 [2024-11-06 15:42:59.140837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:31.574 [2024-11-06 15:42:59.140849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:31.574 5369.00 IOPS, 671.12 MiB/s
00:38:31.574 Latency(us)
00:38:31.574 [2024-11-06T14:42:59.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:31.574 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:38:31.574 nvme0n1 : 2.00 5369.82 671.23 0.00 0.00 2976.76 694.37 12483.05
00:38:31.574 [2024-11-06T14:42:59.212Z] ===================================================================================================================
00:38:31.574 [2024-11-06T14:42:59.212Z] Total : 5369.82 671.23 0.00 0.00 2976.76 694.37 12483.05
00:38:31.574 {
00:38:31.574 "results": [
00:38:31.574 {
00:38:31.574 "job": "nvme0n1",
00:38:31.574 "core_mask": "0x2",
00:38:31.574 "workload": "randread",
00:38:31.574 "status": "finished",
00:38:31.574 "queue_depth": 16,
00:38:31.574 "io_size": 131072,
00:38:31.574 "runtime": 2.002676,
00:38:31.575 "iops": 5369.815187279421,
00:38:31.575 "mibps": 671.2268984099276,
00:38:31.575 "io_failed": 0,
00:38:31.575 "io_timeout": 0,
00:38:31.575 "avg_latency_us": 2976.7629306481754,
00:38:31.575 "min_latency_us": 694.3695238095238,
00:38:31.575 "max_latency_us": 12483.047619047618
00:38:31.575 }
00:38:31.575 ],
00:38:31.575 "core_count": 1
00:38:31.575 }
00:38:31.575 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:31.575 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:31.575 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:31.575 | .driver_specific
00:38:31.575 | .nvme_error
00:38:31.575 | .status_code
00:38:31.575 | .command_transient_transport_error'
00:38:31.575 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:31.834 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 346 > 0 ))
00:38:31.834 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4096721
00:38:31.834 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4096721 ']'
00:38:31.834 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4096721
00:38:31.834 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:38:31.834 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:38:31.834 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4096721
00:38:31.834 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:38:31.834 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:38:31.834 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4096721'
00:38:31.834 killing process with pid 4096721
15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4096721
00:38:31.834 Received shutdown signal, test time was about 2.000000 seconds
00:38:31.834
00:38:31.834 Latency(us)
00:38:31.834 [2024-11-06T14:42:59.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:31.834 [2024-11-06T14:42:59.472Z] ===================================================================================================================
00:38:31.834 [2024-11-06T14:42:59.472Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:31.834 15:42:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4096721
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4097430
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4097430 /var/tmp/bperf.sock
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4097430 ']'
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:38:32.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:38:32.770 15:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:32.770 [2024-11-06 15:43:00.363199] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:38:32.770 [2024-11-06 15:43:00.363289] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097430 ]
00:38:33.030 [2024-11-06 15:43:00.488647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:33.030 [2024-11-06 15:43:00.604592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:33.598 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:38:33.598 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:38:33.598 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:33.598 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:33.857 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:33.857 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:33.857 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:33.857 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:33.857 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:33.857 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:34.116 nvme0n1
00:38:34.116 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:38:34.116 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:34.116 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:34.116 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:34.116 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:34.116 15:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:34.375 Running I/O for 2 seconds...
00:38:34.375 [2024-11-06 15:43:01.828055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458
00:38:34.375 [2024-11-06 15:43:01.829125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.375 [2024-11-06 15:43:01.829165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:38:34.375 [2024-11-06 15:43:01.838750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be95a0
00:38:34.375 [2024-11-06 15:43:01.839283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.375 [2024-11-06 15:43:01.839314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:38:34.375 [2024-11-06 15:43:01.851932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd640
00:38:34.375 [2024-11-06 15:43:01.853660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.375 [2024-11-06 15:43:01.853689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:38:34.375 [2024-11-06 15:43:01.859316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8d30
00:38:34.375 [2024-11-06 15:43:01.860101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.375 [2024-11-06 15:43:01.860134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:38:34.375 [2024-11-06 15:43:01.869700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9168
00:38:34.375 [2024-11-06 15:43:01.870543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.375 [2024-11-06 15:43:01.870571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:01.882485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3d08
00:38:34.376 [2024-11-06 15:43:01.883968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:01.883996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:01.893070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430
00:38:34.376 [2024-11-06 15:43:01.894561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:01.894587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:01.902030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc998
00:38:34.376 [2024-11-06 15:43:01.902894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:01.902920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:01.911620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bedd58
00:38:34.376 [2024-11-06 15:43:01.912561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:01.912589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:01.924470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf550
00:38:34.376 [2024-11-06 15:43:01.925973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:01.925999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:01.934311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd208
00:38:34.376 [2024-11-06 15:43:01.935391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:01.935418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:01.946180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0
00:38:34.376 [2024-11-06 15:43:01.947884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:01.947911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:01.956067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5ec8
00:38:34.376 [2024-11-06 15:43:01.957607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:01.957633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:01.963317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0
00:38:34.376 [2024-11-06 15:43:01.964028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:01.964054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:01.974245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde470
00:38:34.376 [2024-11-06 15:43:01.975111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:01.975139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:01.986820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0788
00:38:34.376 [2024-11-06 15:43:01.988013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:01.988039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:01.995636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1f80
00:38:34.376 [2024-11-06 15:43:01.996135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:01.996161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:38:34.376 [2024-11-06 15:43:02.009017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4578
00:38:34.376 [2024-11-06 15:43:02.010781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.376 [2024-11-06 15:43:02.010808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:38:34.635 [2024-11-06 15:43:02.016817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6020
00:38:34.635 [2024-11-06 15:43:02.017737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.635 [2024-11-06 15:43:02.017763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:38:34.635 [2024-11-06 15:43:02.029526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28
00:38:34.635 [2024-11-06 15:43:02.031025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.635 [2024-11-06 15:43:02.031051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:38:34.635 [2024-11-06 15:43:02.038960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0
00:38:34.635 [2024-11-06 15:43:02.040455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.635 [2024-11-06 15:43:02.040485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:38:34.635 [2024-11-06 15:43:02.049840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328
00:38:34.635 [2024-11-06 15:43:02.050761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.635 [2024-11-06 15:43:02.050788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:38:34.635 [2024-11-06 15:43:02.060630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf118
00:38:34.635 [2024-11-06 15:43:02.061781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:34.636 [2024-11-06 15:43:02.061809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:38:34.636 [2024-11-06 15:43:02.070811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x618000004480) with pdu=0x200016be6738 00:38:34.636 [2024-11-06 15:43:02.072084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.072112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.080760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec840 00:38:34.636 [2024-11-06 15:43:02.081986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.082013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.091963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:38:34.636 [2024-11-06 15:43:02.093334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.093361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.102565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6738 00:38:34.636 [2024-11-06 15:43:02.103463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.103490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.112496] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:38:34.636 [2024-11-06 15:43:02.113949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.113975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.123563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfcdd0 00:38:34.636 [2024-11-06 15:43:02.124481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.124506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.133576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3e60 00:38:34.636 [2024-11-06 15:43:02.134618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.134644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.143870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa3a0 00:38:34.636 [2024-11-06 15:43:02.145123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.145150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 
sqhd:003f p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.154350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc560 00:38:34.636 [2024-11-06 15:43:02.155078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.155104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.164155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:38:34.636 [2024-11-06 15:43:02.164809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.164834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.175146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf35f0 00:38:34.636 [2024-11-06 15:43:02.175884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.175910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.185250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:38:34.636 [2024-11-06 15:43:02.186278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.186304] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.195691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdfdc0 00:38:34.636 [2024-11-06 15:43:02.196797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.196823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.205951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:38:34.636 [2024-11-06 15:43:02.206715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.206742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.216101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd208 00:38:34.636 [2024-11-06 15:43:02.216942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.216970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.225871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:38:34.636 [2024-11-06 15:43:02.226692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 
15:43:02.226718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.238665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdfdc0 00:38:34.636 [2024-11-06 15:43:02.239980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.240006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.249573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bff3c8 00:38:34.636 [2024-11-06 15:43:02.251015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.251041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.260150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb480 00:38:34.636 [2024-11-06 15:43:02.261675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.261701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:34.636 [2024-11-06 15:43:02.267864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8a50 00:38:34.636 [2024-11-06 15:43:02.268778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21454 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.636 [2024-11-06 15:43:02.268804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:38:34.896 [2024-11-06 15:43:02.281169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1710 00:38:34.896 [2024-11-06 15:43:02.282632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.896 [2024-11-06 15:43:02.282659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:38:34.896 [2024-11-06 15:43:02.290733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7538 00:38:34.896 [2024-11-06 15:43:02.292012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.896 [2024-11-06 15:43:02.292040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:34.896 [2024-11-06 15:43:02.299666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:38:34.896 [2024-11-06 15:43:02.300434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.896 [2024-11-06 15:43:02.300462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:34.896 [2024-11-06 15:43:02.312546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec840 00:38:34.896 [2024-11-06 15:43:02.313774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.896 [2024-11-06 15:43:02.313802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:38:34.896 [2024-11-06 15:43:02.323255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:38:34.896 [2024-11-06 15:43:02.324587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.896 [2024-11-06 15:43:02.324614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:38:34.896 [2024-11-06 15:43:02.333398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:38:34.896 [2024-11-06 15:43:02.334490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.896 [2024-11-06 15:43:02.334516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:34.896 [2024-11-06 15:43:02.344847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0ff8 00:38:34.896 [2024-11-06 15:43:02.346078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.346105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.355389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016becc78 00:38:34.897 [2024-11-06 15:43:02.356616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.356642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.365339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaab8 00:38:34.897 [2024-11-06 15:43:02.366430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.366456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.375217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:38:34.897 [2024-11-06 15:43:02.376117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.376142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.387660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bedd58 00:38:34.897 [2024-11-06 15:43:02.389141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.389167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.397226] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf96f8 00:38:34.897 [2024-11-06 15:43:02.398696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.398722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.406221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdfdc0 00:38:34.897 [2024-11-06 15:43:02.407051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.407077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.418832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:38:34.897 [2024-11-06 15:43:02.420070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.420097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.429363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3060 00:38:34.897 [2024-11-06 15:43:02.430730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.430756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 
cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.439951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:38:34.897 [2024-11-06 15:43:02.441349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.441375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.449598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:38:34.897 [2024-11-06 15:43:02.450526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.450552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.460745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0350 00:38:34.897 [2024-11-06 15:43:02.461872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.461898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.470967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0 00:38:34.897 [2024-11-06 15:43:02.472095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.472122] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.483073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed4e8 00:38:34.897 [2024-11-06 15:43:02.484760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.484787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.490719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28 00:38:34.897 [2024-11-06 15:43:02.491584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.491614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.502302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:38:34.897 [2024-11-06 15:43:02.503362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.503388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.512735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0ff8 00:38:34.897 [2024-11-06 15:43:02.513746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 
15:43:02.513773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:34.897 [2024-11-06 15:43:02.523220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:38:34.897 [2024-11-06 15:43:02.524250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.897 [2024-11-06 15:43:02.524276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.534214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:38:35.157 [2024-11-06 15:43:02.535399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.535426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.544058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7100 00:38:35.157 [2024-11-06 15:43:02.545038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.545063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.554061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdfdc0 00:38:35.157 [2024-11-06 15:43:02.555032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3037 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.555058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.565006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be99d8 00:38:35.157 [2024-11-06 15:43:02.566116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.566141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.575930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be95a0 00:38:35.157 [2024-11-06 15:43:02.577172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.577197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.586818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:38:35.157 [2024-11-06 15:43:02.588261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.588288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.598129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be23b8 00:38:35.157 [2024-11-06 15:43:02.599751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.599777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.607490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0ff8 00:38:35.157 [2024-11-06 15:43:02.608243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.608269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.617332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7da8 00:38:35.157 [2024-11-06 15:43:02.617886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.617911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.627981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdfdc0 00:38:35.157 [2024-11-06 15:43:02.628817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.628843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.639526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016bec408 00:38:35.157 [2024-11-06 15:43:02.639717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.639741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.650534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.157 [2024-11-06 15:43:02.650727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.650751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.661598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.157 [2024-11-06 15:43:02.661790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.661816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.157 [2024-11-06 15:43:02.672621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.157 [2024-11-06 15:43:02.672810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.157 [2024-11-06 15:43:02.672837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.158 [2024-11-06 15:43:02.683681] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.158 [2024-11-06 15:43:02.683873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.158 [2024-11-06 15:43:02.683896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.158 [2024-11-06 15:43:02.694724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.158 [2024-11-06 15:43:02.694913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.158 [2024-11-06 15:43:02.694936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.158 [2024-11-06 15:43:02.705717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.158 [2024-11-06 15:43:02.705905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.158 [2024-11-06 15:43:02.705926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.158 [2024-11-06 15:43:02.716780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.158 [2024-11-06 15:43:02.716970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.158 [2024-11-06 15:43:02.716994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:38:35.158 [2024-11-06 15:43:02.728072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.158 [2024-11-06 15:43:02.728274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.158 [2024-11-06 15:43:02.728298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.158 [2024-11-06 15:43:02.739092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.158 [2024-11-06 15:43:02.739298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.158 [2024-11-06 15:43:02.739322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.158 [2024-11-06 15:43:02.750107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.158 [2024-11-06 15:43:02.750307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.158 [2024-11-06 15:43:02.750332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.158 [2024-11-06 15:43:02.761121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.158 [2024-11-06 15:43:02.761319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.158 [2024-11-06 15:43:02.761343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.158 [2024-11-06 15:43:02.772160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.158 [2024-11-06 15:43:02.772364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.158 [2024-11-06 15:43:02.772389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.158 [2024-11-06 15:43:02.783185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.158 [2024-11-06 15:43:02.783383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.158 [2024-11-06 15:43:02.783407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.794454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.794645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.794668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.805636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.805826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 
15:43:02.805850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.816672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.816873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.816896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 24009.00 IOPS, 93.79 MiB/s [2024-11-06T14:43:03.056Z] [2024-11-06 15:43:02.827913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.828104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.828128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.838983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.839171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.839196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.850262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.850454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.850478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.861497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.861688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.861721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.872583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.872772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.872795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.883629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.883819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.883843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.894642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.894831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.894855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.905702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.905894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.905920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.916699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.916889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.916914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.927783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.927974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.927998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.938798] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.938989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.939013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.949814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.950005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.950028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.960904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.961094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.961124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.971953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.972144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.972168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.982979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.983169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.418 [2024-11-06 15:43:02.983192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.418 [2024-11-06 15:43:02.994015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.418 [2024-11-06 15:43:02.994213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.419 [2024-11-06 15:43:02.994236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.419 [2024-11-06 15:43:03.005236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.419 [2024-11-06 15:43:03.005430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.419 [2024-11-06 15:43:03.005455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.419 [2024-11-06 15:43:03.016253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.419 [2024-11-06 15:43:03.016448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.419 [2024-11-06 15:43:03.016472] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.419 [2024-11-06 15:43:03.027308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.419 [2024-11-06 15:43:03.027501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.419 [2024-11-06 15:43:03.027525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.419 [2024-11-06 15:43:03.038338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.419 [2024-11-06 15:43:03.038532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.419 [2024-11-06 15:43:03.038556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.419 [2024-11-06 15:43:03.049477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.419 [2024-11-06 15:43:03.049673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.419 [2024-11-06 15:43:03.049697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.678 [2024-11-06 15:43:03.060757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.678 [2024-11-06 15:43:03.060948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.678 [2024-11-06 
15:43:03.060972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.678 [2024-11-06 15:43:03.071854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.678 [2024-11-06 15:43:03.072045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.678 [2024-11-06 15:43:03.072069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.678 [2024-11-06 15:43:03.082908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.678 [2024-11-06 15:43:03.083097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.678 [2024-11-06 15:43:03.083121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.678 [2024-11-06 15:43:03.093925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.678 [2024-11-06 15:43:03.094130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.678 [2024-11-06 15:43:03.094154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.678 [2024-11-06 15:43:03.105218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.678 [2024-11-06 15:43:03.105413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13119 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.678 [2024-11-06 15:43:03.105437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.116399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.116612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.116638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.127571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.127760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.127784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.138597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.138781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.138804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.149571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.149762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.149789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.160597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.160789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.160813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.171629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.171820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.171844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.182699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.182892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.182916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.193744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.193937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.193963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.204965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.205157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.205180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.215963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.216153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.216176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.227023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.227221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.227245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.238077] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.238276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.238300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.249340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.249543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.249567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.260404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.260596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.260620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.271432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.271629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.271652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.282516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.282704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.282727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.293556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.293749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.293772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.679 [2024-11-06 15:43:03.304578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.679 [2024-11-06 15:43:03.304767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.679 [2024-11-06 15:43:03.304790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.315746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.315936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.315959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.326869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.327060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.327084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.337940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.338130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.338153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.348919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.349112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.349136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.360225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.360417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 
15:43:03.360440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.371396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.371587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.371611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.382500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.382690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.382713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.393509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.393697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.393721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.404505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.404695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:585 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.404718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.415503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.415698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.415721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.426828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.427021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.427044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.437910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.438103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.438127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.448904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.449094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.449117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.459921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.460109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.460132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.470919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.471110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.471132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.481955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.939 [2024-11-06 15:43:03.482147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.939 [2024-11-06 15:43:03.482170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.939 [2024-11-06 15:43:03.492970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016bec408 00:38:35.940 [2024-11-06 15:43:03.493161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.940 [2024-11-06 15:43:03.493184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.940 [2024-11-06 15:43:03.503962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.940 [2024-11-06 15:43:03.504157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.940 [2024-11-06 15:43:03.504180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.940 [2024-11-06 15:43:03.514986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.940 [2024-11-06 15:43:03.515178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.940 [2024-11-06 15:43:03.515209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.940 [2024-11-06 15:43:03.526036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.940 [2024-11-06 15:43:03.526242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.940 [2024-11-06 15:43:03.526265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.940 [2024-11-06 15:43:03.537163] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.940 [2024-11-06 15:43:03.537368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.940 [2024-11-06 15:43:03.537393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.940 [2024-11-06 15:43:03.548418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.940 [2024-11-06 15:43:03.548607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.940 [2024-11-06 15:43:03.548630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.940 [2024-11-06 15:43:03.559424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.940 [2024-11-06 15:43:03.559614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.940 [2024-11-06 15:43:03.559637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:35.940 [2024-11-06 15:43:03.570474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:35.940 [2024-11-06 15:43:03.570672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.940 [2024-11-06 15:43:03.570696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.581850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.582041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.582064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.592844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.593035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.593058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.603874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.604063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.604086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.615185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.615385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.615408] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.626457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.626650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.626677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.637521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.637713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.637737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.648518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.648711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.648734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.659519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.659711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 
15:43:03.659734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.670539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.670729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.670752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.681563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.681752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.681775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.692592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.692785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.692808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.703603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.703794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24143 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.703817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.714607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.714797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.714821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.725640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.725837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.725861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.736674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.736872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.736895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.747698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.747898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.747922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.758922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.759113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.759136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.769939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.770127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.770150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.780987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.781179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.781209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.792030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.792225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.792249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.803002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.200 [2024-11-06 15:43:03.803193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.200 [2024-11-06 15:43:03.803225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.200 [2024-11-06 15:43:03.814069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.201 [2024-11-06 15:43:03.814266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.201 [2024-11-06 15:43:03.814290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.201 23539.50 IOPS, 91.95 MiB/s [2024-11-06T14:43:03.839Z] [2024-11-06 15:43:03.825037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:38:36.201 [2024-11-06 15:43:03.825232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.201 [2024-11-06 15:43:03.825256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:36.201 
00:38:36.201 Latency(us) 00:38:36.201 [2024-11-06T14:43:03.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.201 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:36.201 nvme0n1 : 2.01 23537.43 91.94 0.00 0.00 5428.09 2293.76 14105.84 00:38:36.201 [2024-11-06T14:43:03.839Z] =================================================================================================================== 00:38:36.201 [2024-11-06T14:43:03.839Z] Total : 23537.43 91.94 0.00 0.00 5428.09 2293.76 14105.84 00:38:36.201 { 00:38:36.201 "results": [ 00:38:36.201 { 00:38:36.201 "job": "nvme0n1", 00:38:36.201 "core_mask": "0x2", 00:38:36.201 "workload": "randwrite", 00:38:36.201 "status": "finished", 00:38:36.201 "queue_depth": 128, 00:38:36.201 "io_size": 4096, 00:38:36.201 "runtime": 2.005274, 00:38:36.201 "iops": 23537.43179236354, 00:38:36.201 "mibps": 91.94309293892007, 00:38:36.201 "io_failed": 0, 00:38:36.201 "io_timeout": 0, 00:38:36.201 "avg_latency_us": 5428.088249973011, 00:38:36.201 "min_latency_us": 2293.76, 00:38:36.201 "max_latency_us": 14105.843809523809 00:38:36.201 } 00:38:36.201 ], 00:38:36.201 "core_count": 1 00:38:36.201 } 00:38:36.460 15:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:36.460 15:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:36.460 15:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:36.460 15:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:36.460 | .driver_specific 00:38:36.460 | .nvme_error 00:38:36.460 | .status_code 00:38:36.460 | .command_transient_transport_error' 00:38:36.460 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@71 -- # (( 185 > 0 )) 00:38:36.460 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4097430 00:38:36.460 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4097430 ']' 00:38:36.460 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4097430 00:38:36.460 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:38:36.460 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:36.460 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4097430 00:38:36.460 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:36.460 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:36.460 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4097430' 00:38:36.460 killing process with pid 4097430 00:38:36.460 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4097430 00:38:36.460 Received shutdown signal, test time was about 2.000000 seconds 00:38:36.460 00:38:36.460 Latency(us) 00:38:36.460 [2024-11-06T14:43:04.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.460 [2024-11-06T14:43:04.098Z] =================================================================================================================== 00:38:36.460 [2024-11-06T14:43:04.098Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:36.460 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4097430 00:38:37.397 15:43:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:38:37.397 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:37.398 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:38:37.398 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:38:37.398 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:38:37.398 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4098132 00:38:37.398 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4098132 /var/tmp/bperf.sock 00:38:37.398 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:38:37.398 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 4098132 ']' 00:38:37.398 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:37.398 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:37.398 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:37.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:38:37.398 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:37.398 15:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:37.398 [2024-11-06 15:43:05.024198] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:38:37.398 [2024-11-06 15:43:05.024293] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098132 ] 00:38:37.398 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:37.398 Zero copy mechanism will not be used. 00:38:37.657 [2024-11-06 15:43:05.147431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.657 [2024-11-06 15:43:05.258734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:38.224 15:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:38.224 15:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:38:38.224 15:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:38.224 15:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:38.483 15:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:38.483 15:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.483 15:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:38:38.483 15:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.483 15:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:38.483 15:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:38.741 nvme0n1 00:38:39.001 15:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:38:39.001 15:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:39.001 15:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:39.001 15:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:39.001 15:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:39.001 15:43:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:39.001 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:39.001 Zero copy mechanism will not be used. 00:38:39.001 Running I/O for 2 seconds... 
00:38:39.001 [2024-11-06 15:43:06.499344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.499680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.499722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.505482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.505801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.505835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.512431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.512765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.512797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.519347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.519659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.519690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.526155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.526474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.526503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.533224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.533552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.533581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.539786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.540106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.540135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.545591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.545905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 
15:43:06.545941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.551592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.551904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.551932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.557829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.558129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.558156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.563503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.563814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.563842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.569633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.569948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.569975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.575833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.576148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.576176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.581473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.581791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.581818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.586979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.587307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.587340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.592608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.592924] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.592951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.598177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.598251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.001 [2024-11-06 15:43:06.598276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.001 [2024-11-06 15:43:06.604144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.001 [2024-11-06 15:43:06.604464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.002 [2024-11-06 15:43:06.604493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.002 [2024-11-06 15:43:06.609661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.002 [2024-11-06 15:43:06.609964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.002 [2024-11-06 15:43:06.609992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.002 [2024-11-06 15:43:06.615413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.002 [2024-11-06 15:43:06.615732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.002 [2024-11-06 15:43:06.615760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.002 [2024-11-06 15:43:06.621262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.002 [2024-11-06 15:43:06.621581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.002 [2024-11-06 15:43:06.621609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.002 [2024-11-06 15:43:06.627742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.002 [2024-11-06 15:43:06.628056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.002 [2024-11-06 15:43:06.628084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.002 [2024-11-06 15:43:06.633812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.002 [2024-11-06 15:43:06.634135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.002 [2024-11-06 15:43:06.634164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.262 [2024-11-06 
15:43:06.639736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.262 [2024-11-06 15:43:06.640061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-06 15:43:06.640090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.262 [2024-11-06 15:43:06.645913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.262 [2024-11-06 15:43:06.646242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-06 15:43:06.646269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.262 [2024-11-06 15:43:06.651767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.262 [2024-11-06 15:43:06.652091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-06 15:43:06.652119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.262 [2024-11-06 15:43:06.657474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.262 [2024-11-06 15:43:06.657790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-06 15:43:06.657818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.262 [2024-11-06 15:43:06.663280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.262 [2024-11-06 15:43:06.663588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-06 15:43:06.663616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.262 [2024-11-06 15:43:06.669530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.262 [2024-11-06 15:43:06.669845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-06 15:43:06.669874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.262 [2024-11-06 15:43:06.675369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.262 [2024-11-06 15:43:06.675694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-06 15:43:06.675722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.262 [2024-11-06 15:43:06.681484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.262 [2024-11-06 15:43:06.681816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-06 15:43:06.681844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.262 [2024-11-06 15:43:06.687820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.262 [2024-11-06 15:43:06.688137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-06 15:43:06.688165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.262 [2024-11-06 15:43:06.693870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.262 [2024-11-06 15:43:06.694188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-06 15:43:06.694238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.262 [2024-11-06 15:43:06.699904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.262 [2024-11-06 15:43:06.700227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-06 15:43:06.700255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.262 [2024-11-06 15:43:06.705710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.262 [2024-11-06 15:43:06.706024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:39.262 [2024-11-06 15:43:06.706051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.711506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.711822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.711851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.716831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.717143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.717171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.722216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.722552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.722580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.727742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.728053] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.728081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.733020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.733345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.733373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.738453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.738776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.738805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.744360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.744681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.744709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.750196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 
[2024-11-06 15:43:06.750563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.750607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.756008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.756338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.756367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.762382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.762709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.762738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.768366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.768695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.768723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.774200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.774542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.774570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.780708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.781022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.781050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.786615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.786927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.786955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.792505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.792816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.792845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.263 
[2024-11-06 15:43:06.798785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.799122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.799150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.804368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.804687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.804715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.809796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.810110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.810137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.815159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.815501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.815530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.820620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.820933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.820961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.826038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.826362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.826389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.831464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.831783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.831811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.836855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.837168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.837206] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.842211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.842522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.842549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.847692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.848006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.848034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.853473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.853788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.853815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.263 [2024-11-06 15:43:06.859622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.263 [2024-11-06 15:43:06.859939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.263 [2024-11-06 15:43:06.859967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.264 [2024-11-06 15:43:06.865323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.264 [2024-11-06 15:43:06.865641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.264 [2024-11-06 15:43:06.865669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.264 [2024-11-06 15:43:06.871051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.264 [2024-11-06 15:43:06.871379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.264 [2024-11-06 15:43:06.871407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.264 [2024-11-06 15:43:06.876744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.264 [2024-11-06 15:43:06.877067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.264 [2024-11-06 15:43:06.877094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.264 [2024-11-06 15:43:06.882489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.264 [2024-11-06 15:43:06.882803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.264 [2024-11-06 15:43:06.882830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.264 [2024-11-06 15:43:06.887838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.264 [2024-11-06 15:43:06.888156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.264 [2024-11-06 15:43:06.888183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.264 [2024-11-06 15:43:06.893543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.264 [2024-11-06 15:43:06.893862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.264 [2024-11-06 15:43:06.893889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.899781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.899864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.899889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.905510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.905841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.905870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.911037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.911355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.911383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.916651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.916967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.917002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.923068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.923408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.923436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.929108] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.929411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.929440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.934971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.935311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.935343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.940941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.941261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.941288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.947466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.947794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.947821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.954708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.955038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.955066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.961256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.961585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.961613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.966938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.967004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.967029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.973327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.973643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.973671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.979067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.979388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.979415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.984440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.984758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.984785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.989851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.990164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.990191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:06.995259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:06.995590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:06.995617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:07.000638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:07.000955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:07.000998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:07.006277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:07.006597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:07.006625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:07.011773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:07.012091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:07.012119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:07.017214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:07.017535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:07.017563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:07.023051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:07.023385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:07.023413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:07.029005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:07.029359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:07.029397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:07.034510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:07.034827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:07.034859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.524 [2024-11-06 15:43:07.039898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:38:39.524 [2024-11-06 15:43:07.040219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.524 [2024-11-06 15:43:07.040246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.045299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.045612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.045641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.050629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.050943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.050970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.056008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.056338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.056365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.061353] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.061689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.061716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.066814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.067130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.067157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.072138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.072460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.072487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.077442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.077775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.077802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.082736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.083054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.083081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.088007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.088345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.088372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.093278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.093596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.093623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.098570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.098904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.098931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.103877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.104192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.104227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.110054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.110379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.110407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.116665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.116961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.116990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.122063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.122364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.122392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.127461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.127753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.127785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.132893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.133193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.133227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.138170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.138474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.138502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.143361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.143655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.143682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.148642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.148937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.148964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.525 [2024-11-06 15:43:07.153767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.525 [2024-11-06 15:43:07.154073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.525 [2024-11-06 15:43:07.154100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.159284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.159584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.159611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.165431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.165722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.165750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.170712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.170991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.171019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.176287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.176574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.176601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.182714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.182994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.183021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.189018] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.189321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.189348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.195071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.195398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.195426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.200892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.201172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.201199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.206859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.207131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.207158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.213473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.213863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.213891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.220677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.220982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.221010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.226577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.226853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.226881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.232163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.232449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.232477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.237524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.237804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.237832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.243592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.243894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.243921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.249997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.250284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.785 [2024-11-06 15:43:07.250312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.785 [2024-11-06 15:43:07.256400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:39.785 [2024-11-06 15:43:07.256687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:38:39.785 [2024-11-06 15:43:07.256714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:39.785 [2024-11-06 15:43:07.262889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.785 [2024-11-06 15:43:07.263182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.785 [2024-11-06 15:43:07.263218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:39.785 [2024-11-06 15:43:07.269335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.785 [2024-11-06 15:43:07.269721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.785 [2024-11-06 15:43:07.269750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:39.785 [2024-11-06 15:43:07.275802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.785 [2024-11-06 15:43:07.276090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.276119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.282089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.282497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.282524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.289044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.289346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.289381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.295353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.295633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.295661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.301437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.301758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.301786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.307831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.308122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.308149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.315109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.315477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.315505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.322172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.322461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.322489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.328573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.328873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.328900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.335646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.335949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.335976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.341795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.342074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.342102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.347322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.347600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.347627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.352539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.352822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.352849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.357853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.358135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.358163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.362951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.363238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.363265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.368720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.369003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.369030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.374378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.374657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.374685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.380485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.380765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.380792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.386401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.386687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.386719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.393333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.393610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.393638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.400176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.400474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.400501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.406053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.406351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.406379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.411919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.412198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.412233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:39.786 [2024-11-06 15:43:07.417461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:39.786 [2024-11-06 15:43:07.417746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:39.786 [2024-11-06 15:43:07.417775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.422867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.423152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.423180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.428986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.429319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.429347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.435004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.435291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.435320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.440737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.441030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.441060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.446829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.447142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.447169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.453126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.453424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.453453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.459398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.459682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.459710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.465589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.465927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.465954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.471870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.472225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.472253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.478395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.478680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.478708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.483885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.484163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.484190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.046 5224.00 IOPS, 653.00 MiB/s [2024-11-06T14:43:07.684Z] [2024-11-06 15:43:07.490560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.490841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.490874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.496234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.496517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.496545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.046 [2024-11-06 15:43:07.501621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.046 [2024-11-06 15:43:07.501908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.046 [2024-11-06 15:43:07.501934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.507306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.507585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.507630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.512720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.513004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.513032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.518303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.518589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.518617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.523790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.524075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.524103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.529178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.529481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.529509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.534562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.534838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.534866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.539819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.540099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.540126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.545099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.545396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.545424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.550450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.550730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.550758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.555768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.556048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.556075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.560983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.561274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.561301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.566428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.566715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.566743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.571756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.572041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.572068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.577005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.577292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.577319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.582224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.582505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.582537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.587644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.587924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.587952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.592969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.593255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.593282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.598379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.598660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.598687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.603888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.604174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.604207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.609051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.609343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.609371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.614393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.614671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.614699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.619822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.620102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.620130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.625227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.625540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.625568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.631564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.631885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.631913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.638058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.638374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.638401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.644464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.644779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.047 [2024-11-06 15:43:07.644806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.047 [2024-11-06 15:43:07.651171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.047 [2024-11-06 15:43:07.651460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.048 [2024-11-06 15:43:07.651487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.048 [2024-11-06 15:43:07.658120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.048 [2024-11-06 15:43:07.658451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.048 [2024-11-06 15:43:07.658479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.048 [2024-11-06 15:43:07.665755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.048 [2024-11-06 15:43:07.666072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.048 [2024-11-06 15:43:07.666110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.048 [2024-11-06 15:43:07.672584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.048 [2024-11-06 15:43:07.672863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.048 [2024-11-06 15:43:07.672890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.048 [2024-11-06 15:43:07.678992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.048 [2024-11-06 15:43:07.679285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.048 [2024-11-06 15:43:07.679313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.308 [2024-11-06 15:43:07.684614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.308 [2024-11-06 15:43:07.684892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.308 [2024-11-06 15:43:07.684923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.308 [2024-11-06 15:43:07.690164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.308 [2024-11-06 15:43:07.690470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.308 [2024-11-06 15:43:07.690497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.308 [2024-11-06 15:43:07.695344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.308 [2024-11-06 15:43:07.695620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.308 [2024-11-06 15:43:07.695647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.308 [2024-11-06 15:43:07.700663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.308 [2024-11-06 15:43:07.700942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.308 [2024-11-06 15:43:07.700970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.308 [2024-11-06 15:43:07.706072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.308 [2024-11-06 15:43:07.706355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.308 [2024-11-06 15:43:07.706382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.308 [2024-11-06 15:43:07.711648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.308 [2024-11-06 15:43:07.711926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.308 [2024-11-06 15:43:07.711953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.308 [2024-11-06 15:43:07.717425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.308 [2024-11-06 15:43:07.717708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.308 [2024-11-06 15:43:07.717736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.308 [2024-11-06 15:43:07.722733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.308 [2024-11-06 15:43:07.723014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.308 [2024-11-06 15:43:07.723043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.308 [2024-11-06 15:43:07.727794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90
00:38:40.308 [2024-11-06 15:43:07.728073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.308 [2024-11-06 15:43:07.728100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.308 [2024-11-06 15:43:07.733343]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.733627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.733654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.738875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.739154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.739182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.744052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.744339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.744367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.749486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.749766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.749794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.755067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.755351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.755379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.761200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.761505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.761534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.766737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.767021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.767049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.772475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.772753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.772780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.779154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.779450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.779482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.784567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.784845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.784873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.789999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.790291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.790319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.795303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.795584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.795611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.800599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.800888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.800916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.805878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.806161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.806189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.811365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.811645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.811673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.817645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.817994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.818023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.308 [2024-11-06 15:43:07.824843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.308 [2024-11-06 15:43:07.825125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.308 [2024-11-06 15:43:07.825153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.830960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.831265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.831292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.838429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.838713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.838741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.844100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.844387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.844414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.849547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.849826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.849854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.854749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.855031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.855059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.860192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.860483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.860511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.866298] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.866582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.866610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.871815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.872094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.872123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.877340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.877619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.877647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.883388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.883675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.883703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.889033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.889322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.889350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.894814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.895098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.895127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.900670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.900954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.900983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.906395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.906676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.906703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.912950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.913244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.913273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.918527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.918809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.918837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.924035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.924322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.924350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.929638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.929925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.929954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.935460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.935742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.935770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.309 [2024-11-06 15:43:07.941179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.309 [2024-11-06 15:43:07.941473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.309 [2024-11-06 15:43:07.941502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:07.947312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:07.947600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:07.947628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:07.953112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:07.953416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:07.953456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:07.958961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:07.959249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:07.959278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:07.964705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:07.964991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:07.965019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:07.970634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:07.970918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:07.970946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:07.976229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:07.976513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:07.976542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:07.981789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:07.982078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:07.982107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:07.987618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:07.987902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:07.987929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:07.993279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:07.993566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:07.993594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:07.998851] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:07.999130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:07.999158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:08.004728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:08.005013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:08.005041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:08.010505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:08.010787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:08.010815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:08.015842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:08.016122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:08.016151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:08.021001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:08.021292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.569 [2024-11-06 15:43:08.021322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.569 [2024-11-06 15:43:08.026383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.569 [2024-11-06 15:43:08.026673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.026702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.031640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.031923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.031959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.036988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.037275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.037303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.042277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.042554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.042582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.047490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.047783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.047811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.052673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.052955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.052982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.058493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.058889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.058918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.065171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.065484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.065512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.072018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.072370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.072399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.079442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.079777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.079806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.086014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.086300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.086328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.091686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.091969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.091997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.097229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.097510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.097538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.102881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.103191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.103227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.108918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.109208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.109236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.115020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.115309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.115338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.120505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.120802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.120830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.126471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.126758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.126792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.133570] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.133853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.133882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.139666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.139960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.139990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.145261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.145552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.145581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.151169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.151458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.151487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.157247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.157529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.157558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.163135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.163463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.163491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.169025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.169311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.570 [2024-11-06 15:43:08.169340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.570 [2024-11-06 15:43:08.175308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.570 [2024-11-06 15:43:08.175591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.571 [2024-11-06 15:43:08.175620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.571 [2024-11-06 15:43:08.180922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.571 [2024-11-06 15:43:08.181214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.571 [2024-11-06 15:43:08.181242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.571 [2024-11-06 15:43:08.187169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.571 [2024-11-06 15:43:08.187470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.571 [2024-11-06 15:43:08.187498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.571 [2024-11-06 15:43:08.193839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.571 [2024-11-06 15:43:08.194137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.571 [2024-11-06 15:43:08.194165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.571 [2024-11-06 15:43:08.200353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.571 [2024-11-06 15:43:08.200672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:40.571 [2024-11-06 15:43:08.200701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.206497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.206801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.206830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.213302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.213598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.213627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.219452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.219850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.219880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.226468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.226766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.226794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.233455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.233840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.233873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.240708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.241008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.241037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.247646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.248024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.248052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.254806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.255176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.255211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.261707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.262023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.262051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.268974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.269299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.269327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.276432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.276736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.276765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.283716] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.284118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.284146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.290843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.291164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.291193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.297722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.298054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.298083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.304674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.305042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.305070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.311465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.311801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.311830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.318834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.319176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.319211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.831 [2024-11-06 15:43:08.326077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.831 [2024-11-06 15:43:08.326362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.831 [2024-11-06 15:43:08.326390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.331891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.332168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.332196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.337431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.337711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.337738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.342897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.343197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.343232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.348984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.349269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.349304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.355247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.355524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.355552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.360638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.360914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.360942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.366005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.366291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.366319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.371524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.371801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.371829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.376863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.377142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.377169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.382156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.382437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.382465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.387771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.388054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.388082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.393533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.393813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.393839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.399086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.399374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.399403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.404815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.405092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.405119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.410339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.410623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.410651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.416605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.416941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.416969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.423842] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.424152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.424179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.430901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.431254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.431311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.438196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.438586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.438613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.445552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.445851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.445879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.451893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.452238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.452271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.458103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.458410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.458439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.832 [2024-11-06 15:43:08.464460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:40.832 [2024-11-06 15:43:08.464781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.832 [2024-11-06 15:43:08.464810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.091 [2024-11-06 15:43:08.470735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:41.091 [2024-11-06 15:43:08.471058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.091 [2024-11-06 15:43:08.471087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:41.091 [2024-11-06 15:43:08.477034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:41.091 [2024-11-06 15:43:08.477335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.091 [2024-11-06 15:43:08.477363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:41.091 [2024-11-06 15:43:08.483181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:41.091 [2024-11-06 15:43:08.483582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.091 [2024-11-06 15:43:08.483610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:41.091 [2024-11-06 15:43:08.489812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:38:41.091 [2024-11-06 15:43:08.490811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.091 [2024-11-06 15:43:08.490840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:41.091 5222.50 IOPS, 652.81 MiB/s 00:38:41.091 Latency(us) 00:38:41.091 [2024-11-06T14:43:08.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:41.091 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:41.091 nvme0n1 : 2.00 5219.27 652.41 0.00 
0.00 3060.21 2387.38 12295.80 00:38:41.091 [2024-11-06T14:43:08.729Z] =================================================================================================================== 00:38:41.091 [2024-11-06T14:43:08.729Z] Total : 5219.27 652.41 0.00 0.00 3060.21 2387.38 12295.80 00:38:41.091 { 00:38:41.091 "results": [ 00:38:41.091 { 00:38:41.091 "job": "nvme0n1", 00:38:41.091 "core_mask": "0x2", 00:38:41.091 "workload": "randwrite", 00:38:41.091 "status": "finished", 00:38:41.091 "queue_depth": 16, 00:38:41.091 "io_size": 131072, 00:38:41.091 "runtime": 2.004112, 00:38:41.091 "iops": 5219.269182560655, 00:38:41.091 "mibps": 652.4086478200819, 00:38:41.091 "io_failed": 0, 00:38:41.091 "io_timeout": 0, 00:38:41.091 "avg_latency_us": 3060.209874897569, 00:38:41.091 "min_latency_us": 2387.382857142857, 00:38:41.091 "max_latency_us": 12295.801904761905 00:38:41.091 } 00:38:41.091 ], 00:38:41.091 "core_count": 1 00:38:41.091 } 00:38:41.091 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:41.091 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:41.092 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:41.092 | .driver_specific 00:38:41.092 | .nvme_error 00:38:41.092 | .status_code 00:38:41.092 | .command_transient_transport_error' 00:38:41.092 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:41.092 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 337 > 0 )) 00:38:41.092 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4098132 00:38:41.092 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 
-- # '[' -z 4098132 ']' 00:38:41.092 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4098132 00:38:41.092 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:38:41.092 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:41.092 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4098132 00:38:41.350 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:41.350 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:41.350 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4098132' 00:38:41.350 killing process with pid 4098132 00:38:41.351 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4098132 00:38:41.351 Received shutdown signal, test time was about 2.000000 seconds 00:38:41.351 00:38:41.351 Latency(us) 00:38:41.351 [2024-11-06T14:43:08.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:41.351 [2024-11-06T14:43:08.989Z] =================================================================================================================== 00:38:41.351 [2024-11-06T14:43:08.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:41.351 15:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4098132 00:38:42.288 15:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4095767 00:38:42.288 15:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 4095767 ']' 00:38:42.288 15:43:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 4095767 00:38:42.288 15:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:38:42.288 15:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:42.288 15:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4095767 00:38:42.288 15:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:42.288 15:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:42.288 15:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4095767' 00:38:42.288 killing process with pid 4095767 00:38:42.288 15:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 4095767 00:38:42.288 15:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 4095767 00:38:43.224 00:38:43.224 real 0m21.072s 00:38:43.224 user 0m39.401s 00:38:43.224 sys 0m4.946s 00:38:43.224 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:43.224 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:43.224 ************************************ 00:38:43.224 END TEST nvmf_digest_error 00:38:43.224 ************************************ 00:38:43.224 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:38:43.224 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:38:43.224 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:43.224 15:43:10 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:38:43.224 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:43.224 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:38:43.224 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:43.224 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:43.224 rmmod nvme_tcp 00:38:43.224 rmmod nvme_fabrics 00:38:43.483 rmmod nvme_keyring 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 4095767 ']' 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 4095767 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 4095767 ']' 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 4095767 00:38:43.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (4095767) - No such process 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 4095767 is not found' 00:38:43.483 Process with pid 4095767 is not found 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@791 -- # iptables-save 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.483 15:43:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.389 15:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:45.389 00:38:45.389 real 0m51.797s 00:38:45.389 user 1m23.353s 00:38:45.389 sys 0m14.377s 00:38:45.389 15:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:45.389 15:43:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:45.389 ************************************ 00:38:45.389 END TEST nvmf_digest 00:38:45.389 ************************************ 00:38:45.389 15:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:38:45.389 15:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:38:45.389 15:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:38:45.389 15:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:45.389 15:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:45.389 15:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:38:45.389 15:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.649 ************************************ 00:38:45.649 START TEST nvmf_bdevperf 00:38:45.649 ************************************ 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:45.649 * Looking for test storage... 00:38:45.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:38:45.649 15:43:13 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:45.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.649 --rc genhtml_branch_coverage=1 00:38:45.649 --rc genhtml_function_coverage=1 00:38:45.649 --rc genhtml_legend=1 00:38:45.649 --rc geninfo_all_blocks=1 00:38:45.649 --rc geninfo_unexecuted_blocks=1 00:38:45.649 00:38:45.649 ' 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:45.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.649 --rc genhtml_branch_coverage=1 00:38:45.649 --rc genhtml_function_coverage=1 00:38:45.649 --rc genhtml_legend=1 00:38:45.649 --rc geninfo_all_blocks=1 00:38:45.649 --rc geninfo_unexecuted_blocks=1 00:38:45.649 00:38:45.649 ' 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:45.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.649 --rc genhtml_branch_coverage=1 00:38:45.649 --rc genhtml_function_coverage=1 00:38:45.649 --rc genhtml_legend=1 00:38:45.649 --rc geninfo_all_blocks=1 00:38:45.649 --rc geninfo_unexecuted_blocks=1 00:38:45.649 00:38:45.649 ' 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:45.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.649 --rc genhtml_branch_coverage=1 00:38:45.649 --rc genhtml_function_coverage=1 00:38:45.649 --rc genhtml_legend=1 00:38:45.649 --rc geninfo_all_blocks=1 00:38:45.649 --rc geninfo_unexecuted_blocks=1 00:38:45.649 00:38:45.649 ' 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:45.649 15:43:13 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.649 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:45.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:38:45.650 15:43:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:52.211 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:52.212 Found 
0000:86:00.0 (0x8086 - 0x159b) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:52.212 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:52.212 Found net devices under 0000:86:00.0: cvl_0_0 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:52.212 Found net devices under 0000:86:00.1: cvl_0_1 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:52.212 15:43:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:52.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:52.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:38:52.212 00:38:52.212 --- 10.0.0.2 ping statistics --- 00:38:52.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.212 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:52.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:52.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:38:52.212 00:38:52.212 --- 10.0.0.1 ping statistics --- 00:38:52.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.212 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4102588 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4102588 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 4102588 ']' 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:52.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:52.212 15:43:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.212 [2024-11-06 15:43:19.226070] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:38:52.212 [2024-11-06 15:43:19.226160] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:52.212 [2024-11-06 15:43:19.354348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:52.212 [2024-11-06 15:43:19.460015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:52.212 [2024-11-06 15:43:19.460058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:52.212 [2024-11-06 15:43:19.460068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:52.212 [2024-11-06 15:43:19.460077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:52.212 [2024-11-06 15:43:19.460085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:52.212 [2024-11-06 15:43:19.462278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:52.212 [2024-11-06 15:43:19.462345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:52.212 [2024-11-06 15:43:19.462367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:52.470 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:52.470 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:38:52.470 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:52.470 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:52.470 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.470 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:52.470 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:52.470 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.470 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.470 [2024-11-06 15:43:20.088090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:52.470 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.470 15:43:20 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:52.470 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.470 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.729 Malloc0 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.729 [2024-11-06 15:43:20.212674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:52.729 { 00:38:52.729 "params": { 00:38:52.729 "name": "Nvme$subsystem", 00:38:52.729 "trtype": "$TEST_TRANSPORT", 00:38:52.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:52.729 "adrfam": "ipv4", 00:38:52.729 "trsvcid": "$NVMF_PORT", 00:38:52.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:52.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:52.729 "hdgst": ${hdgst:-false}, 00:38:52.729 "ddgst": ${ddgst:-false} 00:38:52.729 }, 00:38:52.729 "method": "bdev_nvme_attach_controller" 00:38:52.729 } 00:38:52.729 EOF 00:38:52.729 )") 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:38:52.729 15:43:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:52.729 "params": { 00:38:52.729 "name": "Nvme1", 00:38:52.729 "trtype": "tcp", 00:38:52.729 "traddr": "10.0.0.2", 00:38:52.729 "adrfam": "ipv4", 00:38:52.729 "trsvcid": "4420", 00:38:52.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:52.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:52.729 "hdgst": false, 00:38:52.729 "ddgst": false 00:38:52.729 }, 00:38:52.729 "method": "bdev_nvme_attach_controller" 00:38:52.729 }' 00:38:52.729 [2024-11-06 15:43:20.292828] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:38:52.729 [2024-11-06 15:43:20.292910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4102834 ] 00:38:52.987 [2024-11-06 15:43:20.418192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.987 [2024-11-06 15:43:20.535317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.553 Running I/O for 1 seconds... 
00:38:54.488 9749.00 IOPS, 38.08 MiB/s 00:38:54.488 Latency(us) 00:38:54.488 [2024-11-06T14:43:22.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:54.488 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:54.488 Verification LBA range: start 0x0 length 0x4000 00:38:54.488 Nvme1n1 : 1.01 9794.80 38.26 0.00 0.00 13013.96 2839.89 10360.93 00:38:54.488 [2024-11-06T14:43:22.126Z] =================================================================================================================== 00:38:54.488 [2024-11-06T14:43:22.127Z] Total : 9794.80 38.26 0.00 0.00 13013.96 2839.89 10360.93 00:38:55.425 15:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4103190 00:38:55.425 15:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:38:55.425 15:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:38:55.425 15:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:38:55.425 15:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:38:55.425 15:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:38:55.425 15:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:55.425 15:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:55.425 { 00:38:55.425 "params": { 00:38:55.425 "name": "Nvme$subsystem", 00:38:55.425 "trtype": "$TEST_TRANSPORT", 00:38:55.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:55.425 "adrfam": "ipv4", 00:38:55.425 "trsvcid": "$NVMF_PORT", 00:38:55.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:55.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:55.425 "hdgst": ${hdgst:-false}, 00:38:55.425 "ddgst": 
${ddgst:-false} 00:38:55.425 }, 00:38:55.425 "method": "bdev_nvme_attach_controller" 00:38:55.425 } 00:38:55.425 EOF 00:38:55.425 )") 00:38:55.425 15:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:38:55.425 15:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:38:55.425 15:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:38:55.425 15:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:55.425 "params": { 00:38:55.425 "name": "Nvme1", 00:38:55.425 "trtype": "tcp", 00:38:55.425 "traddr": "10.0.0.2", 00:38:55.425 "adrfam": "ipv4", 00:38:55.425 "trsvcid": "4420", 00:38:55.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:55.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:55.425 "hdgst": false, 00:38:55.425 "ddgst": false 00:38:55.425 }, 00:38:55.425 "method": "bdev_nvme_attach_controller" 00:38:55.425 }' 00:38:55.425 [2024-11-06 15:43:22.910406] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:38:55.425 [2024-11-06 15:43:22.910495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103190 ] 00:38:55.426 [2024-11-06 15:43:23.035373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.684 [2024-11-06 15:43:23.150552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:56.420 Running I/O for 15 seconds... 
00:38:58.380 9520.00 IOPS, 37.19 MiB/s [2024-11-06T14:43:26.018Z] 9646.50 IOPS, 37.68 MiB/s [2024-11-06T14:43:26.018Z] 15:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4102588 00:38:58.380 15:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:58.380 [2024-11-06 15:43:25.874174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:58.380 [2024-11-06 15:43:25.874510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.380 [2024-11-06 15:43:25.874801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.380 [2024-11-06 15:43:25.874812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.874821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.874832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.874841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.874852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.874862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.874872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.874882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.874893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:58.381 [2024-11-06 15:43:25.874904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.874914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.874924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.874934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.874944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.874955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.874964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.874977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.874987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.874999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:58.381 [2024-11-06 15:43:25.875268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 
[2024-11-06 15:43:25.875637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.381 [2024-11-06 15:43:25.875657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.381 [2024-11-06 15:43:25.875668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.875969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.875981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:58.382 [2024-11-06 15:43:25.875990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:58.382 [2024-11-06 15:43:25.876011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:58.382 [2024-11-06 15:43:25.876367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.382 [2024-11-06 15:43:25.876512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.382 [2024-11-06 15:43:25.876523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 
[2024-11-06 15:43:25.876731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:58.383 [2024-11-06 15:43:25.876950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:58.383 [2024-11-06 15:43:25.876960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:58.383 [2024-11-06 15:43:25.876971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:58.383 [2024-11-06 15:43:25.876980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:58.383 [2024-11-06 15:43:25.876991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:58.383 [2024-11-06 15:43:25.877000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:58.383 [2024-11-06 15:43:25.877010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:58.383 [2024-11-06 15:43:25.877020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:58.383 [2024-11-06 15:43:25.877030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032eb80 is same with the state(6) to be set
00:38:58.383 [2024-11-06 15:43:25.877044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:38:58.383 [2024-11-06 15:43:25.877053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:38:58.383 [2024-11-06 15:43:25.877068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39680 len:8 PRP1 0x0 PRP2 0x0
00:38:58.383 [2024-11-06 15:43:25.877078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:58.383 [2024-11-06 15:43:25.877476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:38:58.383 [2024-11-06 15:43:25.877493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:58.383 [2024-11-06 15:43:25.877505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:38:58.383 [2024-11-06 15:43:25.877515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:58.383 [2024-11-06 15:43:25.877526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:38:58.383 [2024-11-06 15:43:25.877541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:58.383 [2024-11-06 15:43:25.877553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:38:58.383 [2024-11-06 15:43:25.877562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:58.383 [2024-11-06 15:43:25.877570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.383 [2024-11-06 15:43:25.880583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.383 [2024-11-06 15:43:25.880623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.383 [2024-11-06 15:43:25.881364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.383 [2024-11-06 15:43:25.881389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.383 [2024-11-06 15:43:25.881402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.383 [2024-11-06 15:43:25.881651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.383 [2024-11-06 15:43:25.881881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.383 [2024-11-06 15:43:25.881894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.383 [2024-11-06 15:43:25.881906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.383 [2024-11-06 15:43:25.881920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.383 [2024-11-06 15:43:25.894933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.383 [2024-11-06 15:43:25.895313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.383 [2024-11-06 15:43:25.895338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.383 [2024-11-06 15:43:25.895350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.383 [2024-11-06 15:43:25.895583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.383 [2024-11-06 15:43:25.895801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.383 [2024-11-06 15:43:25.895813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.383 [2024-11-06 15:43:25.895822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.383 [2024-11-06 15:43:25.895832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.383 [2024-11-06 15:43:25.908905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.383 [2024-11-06 15:43:25.909938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.383 [2024-11-06 15:43:25.909969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.384 [2024-11-06 15:43:25.909981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.384 [2024-11-06 15:43:25.910219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.384 [2024-11-06 15:43:25.910439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.384 [2024-11-06 15:43:25.910456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.384 [2024-11-06 15:43:25.910466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.384 [2024-11-06 15:43:25.910475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.384 [2024-11-06 15:43:25.922899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.384 [2024-11-06 15:43:25.923390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.384 [2024-11-06 15:43:25.923415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.384 [2024-11-06 15:43:25.923427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.384 [2024-11-06 15:43:25.923665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.384 [2024-11-06 15:43:25.923902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.384 [2024-11-06 15:43:25.923916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.384 [2024-11-06 15:43:25.923925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.384 [2024-11-06 15:43:25.923935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.384 [2024-11-06 15:43:25.937082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.384 [2024-11-06 15:43:25.937616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.384 [2024-11-06 15:43:25.937641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.384 [2024-11-06 15:43:25.937653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.384 [2024-11-06 15:43:25.937901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.384 [2024-11-06 15:43:25.938156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.384 [2024-11-06 15:43:25.938169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.384 [2024-11-06 15:43:25.938178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.384 [2024-11-06 15:43:25.938189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.384 [2024-11-06 15:43:25.951313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.384 [2024-11-06 15:43:25.951809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.384 [2024-11-06 15:43:25.951833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.384 [2024-11-06 15:43:25.951844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.384 [2024-11-06 15:43:25.952080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.384 [2024-11-06 15:43:25.952324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.384 [2024-11-06 15:43:25.952339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.384 [2024-11-06 15:43:25.952348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.384 [2024-11-06 15:43:25.952363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.384 [2024-11-06 15:43:25.965380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.384 [2024-11-06 15:43:25.965890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.384 [2024-11-06 15:43:25.965913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.384 [2024-11-06 15:43:25.965924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.384 [2024-11-06 15:43:25.966160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.384 [2024-11-06 15:43:25.966404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.384 [2024-11-06 15:43:25.966419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.384 [2024-11-06 15:43:25.966428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.384 [2024-11-06 15:43:25.966438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.384 [2024-11-06 15:43:25.979544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.384 [2024-11-06 15:43:25.980030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.384 [2024-11-06 15:43:25.980105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.384 [2024-11-06 15:43:25.980139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.384 [2024-11-06 15:43:25.980707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.384 [2024-11-06 15:43:25.980945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.384 [2024-11-06 15:43:25.980958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.384 [2024-11-06 15:43:25.980967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.384 [2024-11-06 15:43:25.980978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.384 [2024-11-06 15:43:25.993541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.384 [2024-11-06 15:43:25.994049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.384 [2024-11-06 15:43:25.994109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.384 [2024-11-06 15:43:25.994142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.384 [2024-11-06 15:43:25.994668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.384 [2024-11-06 15:43:25.994905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.384 [2024-11-06 15:43:25.994918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.384 [2024-11-06 15:43:25.994927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.384 [2024-11-06 15:43:25.994937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.384 [2024-11-06 15:43:26.007332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.384 [2024-11-06 15:43:26.007821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.384 [2024-11-06 15:43:26.007844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.384 [2024-11-06 15:43:26.007855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.384 [2024-11-06 15:43:26.008071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.384 [2024-11-06 15:43:26.008322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.384 [2024-11-06 15:43:26.008337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.384 [2024-11-06 15:43:26.008347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.384 [2024-11-06 15:43:26.008357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.645 [2024-11-06 15:43:26.021029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.645 [2024-11-06 15:43:26.021531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.645 [2024-11-06 15:43:26.021554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.645 [2024-11-06 15:43:26.021565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.645 [2024-11-06 15:43:26.021794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.645 [2024-11-06 15:43:26.022024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.645 [2024-11-06 15:43:26.022037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.645 [2024-11-06 15:43:26.022046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.645 [2024-11-06 15:43:26.022056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.645 [2024-11-06 15:43:26.034789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.645 [2024-11-06 15:43:26.035246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.645 [2024-11-06 15:43:26.035270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.645 [2024-11-06 15:43:26.035280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.645 [2024-11-06 15:43:26.035497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.645 [2024-11-06 15:43:26.035714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.645 [2024-11-06 15:43:26.035727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.645 [2024-11-06 15:43:26.035736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.645 [2024-11-06 15:43:26.035745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.645 [2024-11-06 15:43:26.048451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.645 [2024-11-06 15:43:26.048866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.645 [2024-11-06 15:43:26.048929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.645 [2024-11-06 15:43:26.048971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.645 [2024-11-06 15:43:26.049772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.645 [2024-11-06 15:43:26.050038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.645 [2024-11-06 15:43:26.050051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.645 [2024-11-06 15:43:26.050060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.645 [2024-11-06 15:43:26.050070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.645 [2024-11-06 15:43:26.062200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.645 [2024-11-06 15:43:26.062620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.645 [2024-11-06 15:43:26.062642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.645 [2024-11-06 15:43:26.062652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.645 [2024-11-06 15:43:26.062869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.645 [2024-11-06 15:43:26.063086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.645 [2024-11-06 15:43:26.063105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.645 [2024-11-06 15:43:26.063113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.645 [2024-11-06 15:43:26.063122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.645 [2024-11-06 15:43:26.076043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.645 [2024-11-06 15:43:26.076536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.645 [2024-11-06 15:43:26.076558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.645 [2024-11-06 15:43:26.076568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.645 [2024-11-06 15:43:26.076784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.645 [2024-11-06 15:43:26.077001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.645 [2024-11-06 15:43:26.077013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.645 [2024-11-06 15:43:26.077023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.645 [2024-11-06 15:43:26.077032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.645 [2024-11-06 15:43:26.089787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.645 [2024-11-06 15:43:26.090261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.645 [2024-11-06 15:43:26.090323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.645 [2024-11-06 15:43:26.090356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.645 [2024-11-06 15:43:26.090864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.645 [2024-11-06 15:43:26.091083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.645 [2024-11-06 15:43:26.091096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.645 [2024-11-06 15:43:26.091104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.645 [2024-11-06 15:43:26.091113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.645 [2024-11-06 15:43:26.103511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.645 [2024-11-06 15:43:26.103982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.645 [2024-11-06 15:43:26.104003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.645 [2024-11-06 15:43:26.104014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.645 [2024-11-06 15:43:26.104236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.645 [2024-11-06 15:43:26.104480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.645 [2024-11-06 15:43:26.104492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.645 [2024-11-06 15:43:26.104502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.646 [2024-11-06 15:43:26.104511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.646 [2024-11-06 15:43:26.117354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.646 [2024-11-06 15:43:26.117719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.646 [2024-11-06 15:43:26.117779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.646 [2024-11-06 15:43:26.117813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.646 [2024-11-06 15:43:26.118613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.646 [2024-11-06 15:43:26.119049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.646 [2024-11-06 15:43:26.119063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.646 [2024-11-06 15:43:26.119071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.646 [2024-11-06 15:43:26.119080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.646 [2024-11-06 15:43:26.131136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.646 [2024-11-06 15:43:26.131570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.646 [2024-11-06 15:43:26.131593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.646 [2024-11-06 15:43:26.131603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.646 [2024-11-06 15:43:26.131832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.646 [2024-11-06 15:43:26.132062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.646 [2024-11-06 15:43:26.132077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.646 [2024-11-06 15:43:26.132087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.646 [2024-11-06 15:43:26.132098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.646 [2024-11-06 15:43:26.145197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.646 [2024-11-06 15:43:26.145630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.646 [2024-11-06 15:43:26.145653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.646 [2024-11-06 15:43:26.145664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.646 [2024-11-06 15:43:26.145899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.646 [2024-11-06 15:43:26.146134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.646 [2024-11-06 15:43:26.146148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.646 [2024-11-06 15:43:26.146158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.646 [2024-11-06 15:43:26.146167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.646 [2024-11-06 15:43:26.159275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.646 [2024-11-06 15:43:26.159803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.646 [2024-11-06 15:43:26.159827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.646 [2024-11-06 15:43:26.159837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.646 [2024-11-06 15:43:26.160073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.646 [2024-11-06 15:43:26.160316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.646 [2024-11-06 15:43:26.160330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.646 [2024-11-06 15:43:26.160340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.646 [2024-11-06 15:43:26.160350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.646 [2024-11-06 15:43:26.173280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.646 [2024-11-06 15:43:26.173694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.646 [2024-11-06 15:43:26.173717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.646 [2024-11-06 15:43:26.173728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.646 [2024-11-06 15:43:26.173955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.646 [2024-11-06 15:43:26.174183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.646 [2024-11-06 15:43:26.174196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.646 [2024-11-06 15:43:26.174212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.646 [2024-11-06 15:43:26.174224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.646 [2024-11-06 15:43:26.187213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.646 [2024-11-06 15:43:26.187689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.646 [2024-11-06 15:43:26.187712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.646 [2024-11-06 15:43:26.187723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.646 [2024-11-06 15:43:26.187949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.646 [2024-11-06 15:43:26.188178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.646 [2024-11-06 15:43:26.188191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.646 [2024-11-06 15:43:26.188200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.646 [2024-11-06 15:43:26.188216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.646 [2024-11-06 15:43:26.200952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.646 [2024-11-06 15:43:26.201443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.646 [2024-11-06 15:43:26.201465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.646 [2024-11-06 15:43:26.201476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.646 [2024-11-06 15:43:26.201692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.646 [2024-11-06 15:43:26.201908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.646 [2024-11-06 15:43:26.201921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.646 [2024-11-06 15:43:26.201930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.646 [2024-11-06 15:43:26.201939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.646 [2024-11-06 15:43:26.214727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:58.646 [2024-11-06 15:43:26.215122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.646 [2024-11-06 15:43:26.215145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:58.646 [2024-11-06 15:43:26.215155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:58.646 [2024-11-06 15:43:26.215391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:58.646 [2024-11-06 15:43:26.215620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:58.646 [2024-11-06 15:43:26.215633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:58.646 [2024-11-06 15:43:26.215642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:58.646 [2024-11-06 15:43:26.215652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:58.646 [2024-11-06 15:43:26.228427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.646 [2024-11-06 15:43:26.228822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.646 [2024-11-06 15:43:26.228883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.646 [2024-11-06 15:43:26.228915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.646 [2024-11-06 15:43:26.229714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.646 [2024-11-06 15:43:26.230217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.646 [2024-11-06 15:43:26.230230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.646 [2024-11-06 15:43:26.230239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.646 [2024-11-06 15:43:26.230250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.646 [2024-11-06 15:43:26.242180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.646 [2024-11-06 15:43:26.242666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.646 [2024-11-06 15:43:26.242688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.646 [2024-11-06 15:43:26.242698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.646 [2024-11-06 15:43:26.242914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.646 [2024-11-06 15:43:26.243131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.647 [2024-11-06 15:43:26.243143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.647 [2024-11-06 15:43:26.243151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.647 [2024-11-06 15:43:26.243160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.647 [2024-11-06 15:43:26.255854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.647 [2024-11-06 15:43:26.256338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.647 [2024-11-06 15:43:26.256361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.647 [2024-11-06 15:43:26.256371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.647 [2024-11-06 15:43:26.256587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.647 [2024-11-06 15:43:26.256804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.647 [2024-11-06 15:43:26.256817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.647 [2024-11-06 15:43:26.256826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.647 [2024-11-06 15:43:26.256841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.647 [2024-11-06 15:43:26.269650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.647 [2024-11-06 15:43:26.270117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.647 [2024-11-06 15:43:26.270139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.647 [2024-11-06 15:43:26.270152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.647 [2024-11-06 15:43:26.270398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.647 [2024-11-06 15:43:26.270626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.647 [2024-11-06 15:43:26.270639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.647 [2024-11-06 15:43:26.270648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.647 [2024-11-06 15:43:26.270658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.907 [2024-11-06 15:43:26.283528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.907 [2024-11-06 15:43:26.284020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.907 [2024-11-06 15:43:26.284043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.907 [2024-11-06 15:43:26.284054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.907 [2024-11-06 15:43:26.284306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.907 [2024-11-06 15:43:26.284541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.907 [2024-11-06 15:43:26.284554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.907 [2024-11-06 15:43:26.284563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.907 [2024-11-06 15:43:26.284573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.907 [2024-11-06 15:43:26.297277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.907 [2024-11-06 15:43:26.297763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.907 [2024-11-06 15:43:26.297822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.907 [2024-11-06 15:43:26.297854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.907 [2024-11-06 15:43:26.298404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.907 [2024-11-06 15:43:26.298633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.907 [2024-11-06 15:43:26.298645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.907 [2024-11-06 15:43:26.298654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.907 [2024-11-06 15:43:26.298664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.907 [2024-11-06 15:43:26.311013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.907 [2024-11-06 15:43:26.311500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.907 [2024-11-06 15:43:26.311522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.907 [2024-11-06 15:43:26.311532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.907 [2024-11-06 15:43:26.311747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.907 [2024-11-06 15:43:26.311966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.907 [2024-11-06 15:43:26.311978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.907 [2024-11-06 15:43:26.311987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.907 [2024-11-06 15:43:26.311996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.907 [2024-11-06 15:43:26.324803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.907 [2024-11-06 15:43:26.325287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.907 [2024-11-06 15:43:26.325360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.907 [2024-11-06 15:43:26.325393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.907 [2024-11-06 15:43:26.326082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.907 [2024-11-06 15:43:26.326303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.907 [2024-11-06 15:43:26.326316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.907 [2024-11-06 15:43:26.326326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.907 [2024-11-06 15:43:26.326335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.907 [2024-11-06 15:43:26.338530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.907 [2024-11-06 15:43:26.339041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.907 [2024-11-06 15:43:26.339100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.907 [2024-11-06 15:43:26.339133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.907 [2024-11-06 15:43:26.339929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.907 [2024-11-06 15:43:26.340460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.907 [2024-11-06 15:43:26.340473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.907 [2024-11-06 15:43:26.340483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.907 [2024-11-06 15:43:26.340493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.907 [2024-11-06 15:43:26.352272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.907 [2024-11-06 15:43:26.352727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.907 [2024-11-06 15:43:26.352749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.907 [2024-11-06 15:43:26.352759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.908 [2024-11-06 15:43:26.352975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.908 [2024-11-06 15:43:26.353191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.908 [2024-11-06 15:43:26.353209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.908 [2024-11-06 15:43:26.353221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.908 [2024-11-06 15:43:26.353230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.908 [2024-11-06 15:43:26.366032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.908 [2024-11-06 15:43:26.366493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.908 [2024-11-06 15:43:26.366515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.908 [2024-11-06 15:43:26.366525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.908 [2024-11-06 15:43:26.366742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.908 [2024-11-06 15:43:26.366959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.908 [2024-11-06 15:43:26.366971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.908 [2024-11-06 15:43:26.366979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.908 [2024-11-06 15:43:26.366988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.908 [2024-11-06 15:43:26.379773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.908 [2024-11-06 15:43:26.380256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.908 [2024-11-06 15:43:26.380280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.908 [2024-11-06 15:43:26.380291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.908 [2024-11-06 15:43:26.380506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.908 [2024-11-06 15:43:26.380722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.908 [2024-11-06 15:43:26.380735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.908 [2024-11-06 15:43:26.380744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.908 [2024-11-06 15:43:26.380753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.908 [2024-11-06 15:43:26.393567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.908 [2024-11-06 15:43:26.394045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.908 [2024-11-06 15:43:26.394068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.908 [2024-11-06 15:43:26.394078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.908 [2024-11-06 15:43:26.394328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.908 [2024-11-06 15:43:26.394563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.908 [2024-11-06 15:43:26.394576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.908 [2024-11-06 15:43:26.394585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.908 [2024-11-06 15:43:26.394595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.908 [2024-11-06 15:43:26.407701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.908 [2024-11-06 15:43:26.408121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.908 [2024-11-06 15:43:26.408143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.908 [2024-11-06 15:43:26.408154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.908 [2024-11-06 15:43:26.408387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.908 [2024-11-06 15:43:26.408616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.908 [2024-11-06 15:43:26.408629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.908 [2024-11-06 15:43:26.408638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.908 [2024-11-06 15:43:26.408647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.908 [2024-11-06 15:43:26.421368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.908 [2024-11-06 15:43:26.421848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.908 [2024-11-06 15:43:26.421897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.908 [2024-11-06 15:43:26.421932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.908 [2024-11-06 15:43:26.422515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.908 [2024-11-06 15:43:26.422749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.908 [2024-11-06 15:43:26.422763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.908 [2024-11-06 15:43:26.422772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.908 [2024-11-06 15:43:26.422782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.908 [2024-11-06 15:43:26.435058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.908 [2024-11-06 15:43:26.435542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.908 [2024-11-06 15:43:26.435611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.908 [2024-11-06 15:43:26.435644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.908 [2024-11-06 15:43:26.436420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.908 [2024-11-06 15:43:26.436649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.908 [2024-11-06 15:43:26.436661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.908 [2024-11-06 15:43:26.436670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.908 [2024-11-06 15:43:26.436680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.908 [2024-11-06 15:43:26.448819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.908 [2024-11-06 15:43:26.449226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.908 [2024-11-06 15:43:26.449251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.908 [2024-11-06 15:43:26.449262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.908 [2024-11-06 15:43:26.449477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.908 [2024-11-06 15:43:26.449693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.908 [2024-11-06 15:43:26.449706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.908 [2024-11-06 15:43:26.449714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.908 [2024-11-06 15:43:26.449724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.908 [2024-11-06 15:43:26.462511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.908 [2024-11-06 15:43:26.463034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.908 [2024-11-06 15:43:26.463092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.908 [2024-11-06 15:43:26.463125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.908 [2024-11-06 15:43:26.463661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.908 [2024-11-06 15:43:26.463889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.908 [2024-11-06 15:43:26.463906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.908 [2024-11-06 15:43:26.463916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.908 [2024-11-06 15:43:26.463925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.908 [2024-11-06 15:43:26.476258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.908 [2024-11-06 15:43:26.476728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.908 [2024-11-06 15:43:26.476781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.908 [2024-11-06 15:43:26.476817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.908 [2024-11-06 15:43:26.477613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.908 [2024-11-06 15:43:26.478019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.908 [2024-11-06 15:43:26.478033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.908 [2024-11-06 15:43:26.478042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.908 [2024-11-06 15:43:26.478052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.909 [2024-11-06 15:43:26.490072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.909 [2024-11-06 15:43:26.490490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.909 [2024-11-06 15:43:26.490512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.909 [2024-11-06 15:43:26.490521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.909 [2024-11-06 15:43:26.490740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.909 [2024-11-06 15:43:26.490956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.909 [2024-11-06 15:43:26.490968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.909 [2024-11-06 15:43:26.490977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.909 [2024-11-06 15:43:26.490987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.909 [2024-11-06 15:43:26.503887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.909 [2024-11-06 15:43:26.504364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.909 [2024-11-06 15:43:26.504387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.909 [2024-11-06 15:43:26.504396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.909 [2024-11-06 15:43:26.504612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.909 [2024-11-06 15:43:26.504828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.909 [2024-11-06 15:43:26.504841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.909 [2024-11-06 15:43:26.504849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.909 [2024-11-06 15:43:26.504858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.909 [2024-11-06 15:43:26.517645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.909 [2024-11-06 15:43:26.518122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.909 [2024-11-06 15:43:26.518143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.909 [2024-11-06 15:43:26.518153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.909 [2024-11-06 15:43:26.518398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.909 [2024-11-06 15:43:26.518626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.909 [2024-11-06 15:43:26.518640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.909 [2024-11-06 15:43:26.518649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.909 [2024-11-06 15:43:26.518659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:58.909 [2024-11-06 15:43:26.531420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:58.909 [2024-11-06 15:43:26.531918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:58.909 [2024-11-06 15:43:26.531978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:58.909 [2024-11-06 15:43:26.532010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:58.909 [2024-11-06 15:43:26.532583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:58.909 [2024-11-06 15:43:26.532812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:58.909 [2024-11-06 15:43:26.532828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:58.909 [2024-11-06 15:43:26.532838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:58.909 [2024-11-06 15:43:26.532848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.169 [2024-11-06 15:43:26.545379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.169 [2024-11-06 15:43:26.545851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.169 [2024-11-06 15:43:26.545899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.169 [2024-11-06 15:43:26.545935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.169 [2024-11-06 15:43:26.546605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.169 [2024-11-06 15:43:26.546840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.169 [2024-11-06 15:43:26.546853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.169 [2024-11-06 15:43:26.546862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.169 [2024-11-06 15:43:26.546871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.169 [2024-11-06 15:43:26.562195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.169 [2024-11-06 15:43:26.562827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.169 [2024-11-06 15:43:26.562892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.169 [2024-11-06 15:43:26.562924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.169 [2024-11-06 15:43:26.563654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.169 [2024-11-06 15:43:26.563999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.169 [2024-11-06 15:43:26.564017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.169 [2024-11-06 15:43:26.564029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.169 [2024-11-06 15:43:26.564043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.169 [2024-11-06 15:43:26.576157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.169 [2024-11-06 15:43:26.576640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.169 [2024-11-06 15:43:26.576700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.169 [2024-11-06 15:43:26.576734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.169 [2024-11-06 15:43:26.577503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.169 [2024-11-06 15:43:26.578027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.169 [2024-11-06 15:43:26.578053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.169 [2024-11-06 15:43:26.578078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.169 [2024-11-06 15:43:26.578097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.169 [2024-11-06 15:43:26.592627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.169 [2024-11-06 15:43:26.593250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.169 [2024-11-06 15:43:26.593281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.169 [2024-11-06 15:43:26.593297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.169 [2024-11-06 15:43:26.593639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.169 [2024-11-06 15:43:26.593983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.169 [2024-11-06 15:43:26.594001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.169 [2024-11-06 15:43:26.594015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.169 [2024-11-06 15:43:26.594027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.169 [2024-11-06 15:43:26.606722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.169 [2024-11-06 15:43:26.607142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.169 [2024-11-06 15:43:26.607165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.169 [2024-11-06 15:43:26.607176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.169 [2024-11-06 15:43:26.607423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.169 [2024-11-06 15:43:26.607651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.169 [2024-11-06 15:43:26.607664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.169 [2024-11-06 15:43:26.607673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.169 [2024-11-06 15:43:26.607684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.169 [2024-11-06 15:43:26.620530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.169 [2024-11-06 15:43:26.620987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.169 [2024-11-06 15:43:26.621009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.169 [2024-11-06 15:43:26.621019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.169 [2024-11-06 15:43:26.621256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.169 [2024-11-06 15:43:26.621485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.169 [2024-11-06 15:43:26.621498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.169 [2024-11-06 15:43:26.621508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.170 [2024-11-06 15:43:26.621518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.170 [2024-11-06 15:43:26.634436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.170 [2024-11-06 15:43:26.634920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.170 [2024-11-06 15:43:26.634989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.170 [2024-11-06 15:43:26.635023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.170 [2024-11-06 15:43:26.635818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.170 [2024-11-06 15:43:26.636292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.170 [2024-11-06 15:43:26.636305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.170 [2024-11-06 15:43:26.636314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.170 [2024-11-06 15:43:26.636325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.170 7072.67 IOPS, 27.63 MiB/s [2024-11-06T14:43:26.808Z] [2024-11-06 15:43:26.648176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.170 [2024-11-06 15:43:26.648599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.170 [2024-11-06 15:43:26.648622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.170 [2024-11-06 15:43:26.648633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.170 [2024-11-06 15:43:26.648860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.170 [2024-11-06 15:43:26.649088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.170 [2024-11-06 15:43:26.649102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.170 [2024-11-06 15:43:26.649110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.170 [2024-11-06 15:43:26.649120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.170 [2024-11-06 15:43:26.662267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.170 [2024-11-06 15:43:26.662761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.170 [2024-11-06 15:43:26.662783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.170 [2024-11-06 15:43:26.662800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.170 [2024-11-06 15:43:26.663026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.170 [2024-11-06 15:43:26.663276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.170 [2024-11-06 15:43:26.663289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.170 [2024-11-06 15:43:26.663299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.170 [2024-11-06 15:43:26.663309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.170 [2024-11-06 15:43:26.676251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.170 [2024-11-06 15:43:26.676725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.170 [2024-11-06 15:43:26.676751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.170 [2024-11-06 15:43:26.676762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.170 [2024-11-06 15:43:26.676990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.170 [2024-11-06 15:43:26.677224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.170 [2024-11-06 15:43:26.677237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.170 [2024-11-06 15:43:26.677246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.170 [2024-11-06 15:43:26.677256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.170 [2024-11-06 15:43:26.690050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.170 [2024-11-06 15:43:26.690530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.170 [2024-11-06 15:43:26.690552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.170 [2024-11-06 15:43:26.690562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.170 [2024-11-06 15:43:26.690779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.170 [2024-11-06 15:43:26.690996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.170 [2024-11-06 15:43:26.691008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.170 [2024-11-06 15:43:26.691018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.170 [2024-11-06 15:43:26.691027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.170 [2024-11-06 15:43:26.703890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.170 [2024-11-06 15:43:26.704282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.170 [2024-11-06 15:43:26.704305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.170 [2024-11-06 15:43:26.704315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.170 [2024-11-06 15:43:26.704532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.170 [2024-11-06 15:43:26.704749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.170 [2024-11-06 15:43:26.704761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.170 [2024-11-06 15:43:26.704769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.170 [2024-11-06 15:43:26.704778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.170 [2024-11-06 15:43:26.717586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.170 [2024-11-06 15:43:26.718069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.170 [2024-11-06 15:43:26.718091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.170 [2024-11-06 15:43:26.718100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.170 [2024-11-06 15:43:26.718324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.170 [2024-11-06 15:43:26.718540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.170 [2024-11-06 15:43:26.718552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.170 [2024-11-06 15:43:26.718560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.170 [2024-11-06 15:43:26.718569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.170 [2024-11-06 15:43:26.731376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.170 [2024-11-06 15:43:26.731852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.170 [2024-11-06 15:43:26.731874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.170 [2024-11-06 15:43:26.731884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.170 [2024-11-06 15:43:26.732099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.170 [2024-11-06 15:43:26.732350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.170 [2024-11-06 15:43:26.732363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.170 [2024-11-06 15:43:26.732372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.170 [2024-11-06 15:43:26.732382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.170 [2024-11-06 15:43:26.745025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.170 [2024-11-06 15:43:26.745504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.170 [2024-11-06 15:43:26.745563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.170 [2024-11-06 15:43:26.745596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.170 [2024-11-06 15:43:26.746127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.170 [2024-11-06 15:43:26.746372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.170 [2024-11-06 15:43:26.746387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.170 [2024-11-06 15:43:26.746396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.170 [2024-11-06 15:43:26.746405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.170 [2024-11-06 15:43:26.758757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.170 [2024-11-06 15:43:26.759227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.170 [2024-11-06 15:43:26.759251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.170 [2024-11-06 15:43:26.759261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.170 [2024-11-06 15:43:26.759476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.170 [2024-11-06 15:43:26.759691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.171 [2024-11-06 15:43:26.759707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.171 [2024-11-06 15:43:26.759715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.171 [2024-11-06 15:43:26.759724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.171 [2024-11-06 15:43:26.772523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.171 [2024-11-06 15:43:26.773002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.171 [2024-11-06 15:43:26.773024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.171 [2024-11-06 15:43:26.773034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.171 [2024-11-06 15:43:26.773271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.171 [2024-11-06 15:43:26.773499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.171 [2024-11-06 15:43:26.773512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.171 [2024-11-06 15:43:26.773521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.171 [2024-11-06 15:43:26.773530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.171 [2024-11-06 15:43:26.786372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.171 [2024-11-06 15:43:26.786852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.171 [2024-11-06 15:43:26.786874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.171 [2024-11-06 15:43:26.786884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.171 [2024-11-06 15:43:26.787100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.171 [2024-11-06 15:43:26.787345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.171 [2024-11-06 15:43:26.787358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.171 [2024-11-06 15:43:26.787368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.171 [2024-11-06 15:43:26.787378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.171 [2024-11-06 15:43:26.800031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.171 [2024-11-06 15:43:26.800539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.171 [2024-11-06 15:43:26.800598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.171 [2024-11-06 15:43:26.800630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.171 [2024-11-06 15:43:26.801428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.171 [2024-11-06 15:43:26.801779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.171 [2024-11-06 15:43:26.801792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.171 [2024-11-06 15:43:26.801801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.171 [2024-11-06 15:43:26.801814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.432 [2024-11-06 15:43:26.813767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.432 [2024-11-06 15:43:26.814258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.432 [2024-11-06 15:43:26.814318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.432 [2024-11-06 15:43:26.814351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.432 [2024-11-06 15:43:26.814798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.432 [2024-11-06 15:43:26.815015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.432 [2024-11-06 15:43:26.815027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.432 [2024-11-06 15:43:26.815036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.432 [2024-11-06 15:43:26.815045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.432 [2024-11-06 15:43:26.827449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.432 [2024-11-06 15:43:26.827933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.432 [2024-11-06 15:43:26.827995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.432 [2024-11-06 15:43:26.828026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.432 [2024-11-06 15:43:26.828620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.432 [2024-11-06 15:43:26.828849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.432 [2024-11-06 15:43:26.828863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.432 [2024-11-06 15:43:26.828872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.432 [2024-11-06 15:43:26.828882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.432 [2024-11-06 15:43:26.841117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.432 [2024-11-06 15:43:26.841543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.432 [2024-11-06 15:43:26.841603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.432 [2024-11-06 15:43:26.841636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.432 [2024-11-06 15:43:26.842284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.432 [2024-11-06 15:43:26.842512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.432 [2024-11-06 15:43:26.842526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.432 [2024-11-06 15:43:26.842535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.432 [2024-11-06 15:43:26.842545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.432 [2024-11-06 15:43:26.854793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.432 [2024-11-06 15:43:26.855295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.432 [2024-11-06 15:43:26.855353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.432 [2024-11-06 15:43:26.855384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.432 [2024-11-06 15:43:26.855847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.432 [2024-11-06 15:43:26.856069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.432 [2024-11-06 15:43:26.856081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.432 [2024-11-06 15:43:26.856090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.432 [2024-11-06 15:43:26.856100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.432 [2024-11-06 15:43:26.868470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.432 [2024-11-06 15:43:26.868878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.432 [2024-11-06 15:43:26.868902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.432 [2024-11-06 15:43:26.868912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.432 [2024-11-06 15:43:26.869127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.432 [2024-11-06 15:43:26.869374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.432 [2024-11-06 15:43:26.869388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.432 [2024-11-06 15:43:26.869397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.432 [2024-11-06 15:43:26.869408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.432 [2024-11-06 15:43:26.882224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.432 [2024-11-06 15:43:26.882634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.432 [2024-11-06 15:43:26.882656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.432 [2024-11-06 15:43:26.882666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.432 [2024-11-06 15:43:26.882880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.432 [2024-11-06 15:43:26.883095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.432 [2024-11-06 15:43:26.883107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.432 [2024-11-06 15:43:26.883116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.432 [2024-11-06 15:43:26.883125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.432 [2024-11-06 15:43:26.895938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.432 [2024-11-06 15:43:26.896439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.432 [2024-11-06 15:43:26.896499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.432 [2024-11-06 15:43:26.896539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.432 [2024-11-06 15:43:26.897338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.432 [2024-11-06 15:43:26.897786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.432 [2024-11-06 15:43:26.897799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.432 [2024-11-06 15:43:26.897807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.432 [2024-11-06 15:43:26.897816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.432 [2024-11-06 15:43:26.909645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.432 [2024-11-06 15:43:26.910127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.432 [2024-11-06 15:43:26.910149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.432 [2024-11-06 15:43:26.910160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.432 [2024-11-06 15:43:26.910414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.432 [2024-11-06 15:43:26.910649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.432 [2024-11-06 15:43:26.910663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.432 [2024-11-06 15:43:26.910672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.432 [2024-11-06 15:43:26.910682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.432 [2024-11-06 15:43:26.923785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.432 [2024-11-06 15:43:26.924276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.432 [2024-11-06 15:43:26.924338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.432 [2024-11-06 15:43:26.924370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.433 [2024-11-06 15:43:26.924660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.433 [2024-11-06 15:43:26.924876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.433 [2024-11-06 15:43:26.924889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.433 [2024-11-06 15:43:26.924898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.433 [2024-11-06 15:43:26.924907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.433 [2024-11-06 15:43:26.937718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.433 [2024-11-06 15:43:26.938227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.433 [2024-11-06 15:43:26.938289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.433 [2024-11-06 15:43:26.938321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.433 [2024-11-06 15:43:26.938889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.433 [2024-11-06 15:43:26.939122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.433 [2024-11-06 15:43:26.939135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.433 [2024-11-06 15:43:26.939144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.433 [2024-11-06 15:43:26.939154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.433 [2024-11-06 15:43:26.951526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.433 [2024-11-06 15:43:26.951974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.433 [2024-11-06 15:43:26.951997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.433 [2024-11-06 15:43:26.952007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.433 [2024-11-06 15:43:26.952244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.433 [2024-11-06 15:43:26.952472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.433 [2024-11-06 15:43:26.952485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.433 [2024-11-06 15:43:26.952494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.433 [2024-11-06 15:43:26.952504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.433 [2024-11-06 15:43:26.965436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.433 [2024-11-06 15:43:26.965932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.433 [2024-11-06 15:43:26.965990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.433 [2024-11-06 15:43:26.966021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.433 [2024-11-06 15:43:26.966630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.433 [2024-11-06 15:43:26.966847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.433 [2024-11-06 15:43:26.966859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.433 [2024-11-06 15:43:26.966868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.433 [2024-11-06 15:43:26.966876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.433 [2024-11-06 15:43:26.979541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.433 [2024-11-06 15:43:26.980043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.433 [2024-11-06 15:43:26.980066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.433 [2024-11-06 15:43:26.980076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.433 [2024-11-06 15:43:26.980316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.433 [2024-11-06 15:43:26.980550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.433 [2024-11-06 15:43:26.980563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.433 [2024-11-06 15:43:26.980576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.433 [2024-11-06 15:43:26.980586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.433 [2024-11-06 15:43:26.993663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.433 [2024-11-06 15:43:26.994155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.433 [2024-11-06 15:43:26.994177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.433 [2024-11-06 15:43:26.994188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.433 [2024-11-06 15:43:26.994428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.433 [2024-11-06 15:43:26.994664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.433 [2024-11-06 15:43:26.994677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.433 [2024-11-06 15:43:26.994686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.433 [2024-11-06 15:43:26.994696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.433 [2024-11-06 15:43:27.007725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.433 [2024-11-06 15:43:27.008238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.433 [2024-11-06 15:43:27.008264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.433 [2024-11-06 15:43:27.008275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.433 [2024-11-06 15:43:27.008511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.433 [2024-11-06 15:43:27.008745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.433 [2024-11-06 15:43:27.008759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.433 [2024-11-06 15:43:27.008769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.433 [2024-11-06 15:43:27.008779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.433 [2024-11-06 15:43:27.021870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.433 [2024-11-06 15:43:27.022299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.433 [2024-11-06 15:43:27.022325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.433 [2024-11-06 15:43:27.022337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.433 [2024-11-06 15:43:27.022572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.433 [2024-11-06 15:43:27.022808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.433 [2024-11-06 15:43:27.022822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.433 [2024-11-06 15:43:27.022831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.433 [2024-11-06 15:43:27.022845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.433 [2024-11-06 15:43:27.035958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.433 [2024-11-06 15:43:27.036448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.433 [2024-11-06 15:43:27.036472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.433 [2024-11-06 15:43:27.036483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.433 [2024-11-06 15:43:27.036718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.433 [2024-11-06 15:43:27.036953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.433 [2024-11-06 15:43:27.036967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.433 [2024-11-06 15:43:27.036976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.433 [2024-11-06 15:43:27.036986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.433 [2024-11-06 15:43:27.049862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.433 [2024-11-06 15:43:27.050348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.433 [2024-11-06 15:43:27.050372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.433 [2024-11-06 15:43:27.050383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.433 [2024-11-06 15:43:27.050611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.433 [2024-11-06 15:43:27.050839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.433 [2024-11-06 15:43:27.050853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.433 [2024-11-06 15:43:27.050868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.433 [2024-11-06 15:43:27.050877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.433 [2024-11-06 15:43:27.063754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.434 [2024-11-06 15:43:27.064231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.434 [2024-11-06 15:43:27.064255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.434 [2024-11-06 15:43:27.064265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.434 [2024-11-06 15:43:27.064493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.434 [2024-11-06 15:43:27.064722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.434 [2024-11-06 15:43:27.064735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.434 [2024-11-06 15:43:27.064744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.434 [2024-11-06 15:43:27.064754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.694 [2024-11-06 15:43:27.077570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.694 [2024-11-06 15:43:27.078053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.694 [2024-11-06 15:43:27.078075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.694 [2024-11-06 15:43:27.078085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.694 [2024-11-06 15:43:27.078306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.694 [2024-11-06 15:43:27.078524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.694 [2024-11-06 15:43:27.078536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.694 [2024-11-06 15:43:27.078545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.694 [2024-11-06 15:43:27.078554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.694 [2024-11-06 15:43:27.091435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.694 [2024-11-06 15:43:27.091793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.694 [2024-11-06 15:43:27.091816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.694 [2024-11-06 15:43:27.091826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.694 [2024-11-06 15:43:27.092041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.694 [2024-11-06 15:43:27.092266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.694 [2024-11-06 15:43:27.092279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.694 [2024-11-06 15:43:27.092287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.694 [2024-11-06 15:43:27.092297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.694 [2024-11-06 15:43:27.105263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.694 [2024-11-06 15:43:27.105606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.694 [2024-11-06 15:43:27.105628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.694 [2024-11-06 15:43:27.105639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.694 [2024-11-06 15:43:27.105854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.694 [2024-11-06 15:43:27.106071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.695 [2024-11-06 15:43:27.106083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.695 [2024-11-06 15:43:27.106092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.695 [2024-11-06 15:43:27.106102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.695 [2024-11-06 15:43:27.119161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.695 [2024-11-06 15:43:27.119683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.695 [2024-11-06 15:43:27.119744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.695 [2024-11-06 15:43:27.119784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.695 [2024-11-06 15:43:27.120329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.695 [2024-11-06 15:43:27.120563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.695 [2024-11-06 15:43:27.120575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.695 [2024-11-06 15:43:27.120585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.695 [2024-11-06 15:43:27.120594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.695 [2024-11-06 15:43:27.132921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.695 [2024-11-06 15:43:27.133353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.695 [2024-11-06 15:43:27.133376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.695 [2024-11-06 15:43:27.133386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.695 [2024-11-06 15:43:27.133603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.695 [2024-11-06 15:43:27.133819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.695 [2024-11-06 15:43:27.133831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.695 [2024-11-06 15:43:27.133839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.695 [2024-11-06 15:43:27.133848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.695 [2024-11-06 15:43:27.146900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.695 [2024-11-06 15:43:27.147310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.695 [2024-11-06 15:43:27.147381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.695 [2024-11-06 15:43:27.147415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.695 [2024-11-06 15:43:27.148180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.695 [2024-11-06 15:43:27.148402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.695 [2024-11-06 15:43:27.148414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.695 [2024-11-06 15:43:27.148423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.695 [2024-11-06 15:43:27.148432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.695 [2024-11-06 15:43:27.160784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.695 [2024-11-06 15:43:27.161286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.695 [2024-11-06 15:43:27.161312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.695 [2024-11-06 15:43:27.161322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.695 [2024-11-06 15:43:27.161566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.695 [2024-11-06 15:43:27.161801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.695 [2024-11-06 15:43:27.161814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.695 [2024-11-06 15:43:27.161824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.695 [2024-11-06 15:43:27.161833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.695 [2024-11-06 15:43:27.174949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:59.695 [2024-11-06 15:43:27.175378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.695 [2024-11-06 15:43:27.175401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:38:59.695 [2024-11-06 15:43:27.175412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:38:59.695 [2024-11-06 15:43:27.175640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:38:59.695 [2024-11-06 15:43:27.175868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:59.695 [2024-11-06 15:43:27.175881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:59.695 [2024-11-06 15:43:27.175890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:59.695 [2024-11-06 15:43:27.175899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:59.695 [2024-11-06 15:43:27.188906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.695 [2024-11-06 15:43:27.189372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.695 [2024-11-06 15:43:27.189395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.695 [2024-11-06 15:43:27.189405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.695 [2024-11-06 15:43:27.189633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.695 [2024-11-06 15:43:27.189862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.695 [2024-11-06 15:43:27.189876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.695 [2024-11-06 15:43:27.189885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.695 [2024-11-06 15:43:27.189894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.695 [2024-11-06 15:43:27.202683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.695 [2024-11-06 15:43:27.203176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.695 [2024-11-06 15:43:27.203251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.695 [2024-11-06 15:43:27.203286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.695 [2024-11-06 15:43:27.203756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.695 [2024-11-06 15:43:27.203974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.695 [2024-11-06 15:43:27.203986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.695 [2024-11-06 15:43:27.203998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.695 [2024-11-06 15:43:27.204008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.695 [2024-11-06 15:43:27.216576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.695 [2024-11-06 15:43:27.217032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.695 [2024-11-06 15:43:27.217084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.695 [2024-11-06 15:43:27.217120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.695 [2024-11-06 15:43:27.217699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.695 [2024-11-06 15:43:27.217928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.695 [2024-11-06 15:43:27.217941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.695 [2024-11-06 15:43:27.217950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.695 [2024-11-06 15:43:27.217960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.695 [2024-11-06 15:43:27.230314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.695 [2024-11-06 15:43:27.230739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.695 [2024-11-06 15:43:27.230761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.695 [2024-11-06 15:43:27.230771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.695 [2024-11-06 15:43:27.230987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.695 [2024-11-06 15:43:27.231210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.695 [2024-11-06 15:43:27.231223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.695 [2024-11-06 15:43:27.231232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.695 [2024-11-06 15:43:27.231242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.695 [2024-11-06 15:43:27.244338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.695 [2024-11-06 15:43:27.244792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.695 [2024-11-06 15:43:27.244814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.696 [2024-11-06 15:43:27.244825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.696 [2024-11-06 15:43:27.245042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.696 [2024-11-06 15:43:27.245283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.696 [2024-11-06 15:43:27.245296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.696 [2024-11-06 15:43:27.245306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.696 [2024-11-06 15:43:27.245316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.696 [2024-11-06 15:43:27.258419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.696 [2024-11-06 15:43:27.258921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.696 [2024-11-06 15:43:27.258945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.696 [2024-11-06 15:43:27.258956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.696 [2024-11-06 15:43:27.259183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.696 [2024-11-06 15:43:27.259419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.696 [2024-11-06 15:43:27.259433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.696 [2024-11-06 15:43:27.259442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.696 [2024-11-06 15:43:27.259452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.696 [2024-11-06 15:43:27.272345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.696 [2024-11-06 15:43:27.272825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.696 [2024-11-06 15:43:27.272886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.696 [2024-11-06 15:43:27.272920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.696 [2024-11-06 15:43:27.273435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.696 [2024-11-06 15:43:27.273662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.696 [2024-11-06 15:43:27.273675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.696 [2024-11-06 15:43:27.273684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.696 [2024-11-06 15:43:27.273693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.696 [2024-11-06 15:43:27.286245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.696 [2024-11-06 15:43:27.286617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.696 [2024-11-06 15:43:27.286678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.696 [2024-11-06 15:43:27.286711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.696 [2024-11-06 15:43:27.287508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.696 [2024-11-06 15:43:27.287854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.696 [2024-11-06 15:43:27.287866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.696 [2024-11-06 15:43:27.287874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.696 [2024-11-06 15:43:27.287882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.696 [2024-11-06 15:43:27.299969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.696 [2024-11-06 15:43:27.300448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.696 [2024-11-06 15:43:27.300474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.696 [2024-11-06 15:43:27.300485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.696 [2024-11-06 15:43:27.300713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.696 [2024-11-06 15:43:27.300942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.696 [2024-11-06 15:43:27.300955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.696 [2024-11-06 15:43:27.300965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.696 [2024-11-06 15:43:27.300975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.696 [2024-11-06 15:43:27.313900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.696 [2024-11-06 15:43:27.314342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.696 [2024-11-06 15:43:27.314366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.696 [2024-11-06 15:43:27.314377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.696 [2024-11-06 15:43:27.314605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.696 [2024-11-06 15:43:27.314833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.696 [2024-11-06 15:43:27.314847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.696 [2024-11-06 15:43:27.314856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.696 [2024-11-06 15:43:27.314866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.696 [2024-11-06 15:43:27.327821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.696 [2024-11-06 15:43:27.328305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.696 [2024-11-06 15:43:27.328365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.696 [2024-11-06 15:43:27.328398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.696 [2024-11-06 15:43:27.328670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.696 [2024-11-06 15:43:27.328898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.696 [2024-11-06 15:43:27.328910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.696 [2024-11-06 15:43:27.328919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.696 [2024-11-06 15:43:27.328930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.957 [2024-11-06 15:43:27.341741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.957 [2024-11-06 15:43:27.342256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.957 [2024-11-06 15:43:27.342319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.957 [2024-11-06 15:43:27.342353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.957 [2024-11-06 15:43:27.342851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.957 [2024-11-06 15:43:27.343079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.957 [2024-11-06 15:43:27.343092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.957 [2024-11-06 15:43:27.343101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.957 [2024-11-06 15:43:27.343111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.957 [2024-11-06 15:43:27.355638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.957 [2024-11-06 15:43:27.356129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.957 [2024-11-06 15:43:27.356151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.957 [2024-11-06 15:43:27.356162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.957 [2024-11-06 15:43:27.356397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.957 [2024-11-06 15:43:27.356627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.957 [2024-11-06 15:43:27.356639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.957 [2024-11-06 15:43:27.356648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.957 [2024-11-06 15:43:27.356657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.957 [2024-11-06 15:43:27.369372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.957 [2024-11-06 15:43:27.369791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.957 [2024-11-06 15:43:27.369814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.957 [2024-11-06 15:43:27.369824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.957 [2024-11-06 15:43:27.370052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.957 [2024-11-06 15:43:27.370288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.957 [2024-11-06 15:43:27.370302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.957 [2024-11-06 15:43:27.370311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.957 [2024-11-06 15:43:27.370321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.957 [2024-11-06 15:43:27.383266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.957 [2024-11-06 15:43:27.383622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.957 [2024-11-06 15:43:27.383645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.957 [2024-11-06 15:43:27.383655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.957 [2024-11-06 15:43:27.383883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.957 [2024-11-06 15:43:27.384111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.957 [2024-11-06 15:43:27.384128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.957 [2024-11-06 15:43:27.384137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.957 [2024-11-06 15:43:27.384147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.957 [2024-11-06 15:43:27.397173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.957 [2024-11-06 15:43:27.397546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.957 [2024-11-06 15:43:27.397569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.957 [2024-11-06 15:43:27.397579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.957 [2024-11-06 15:43:27.397807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.957 [2024-11-06 15:43:27.398035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.957 [2024-11-06 15:43:27.398047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.957 [2024-11-06 15:43:27.398057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.957 [2024-11-06 15:43:27.398066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.957 [2024-11-06 15:43:27.411001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.957 [2024-11-06 15:43:27.411462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.957 [2024-11-06 15:43:27.411485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.957 [2024-11-06 15:43:27.411495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.957 [2024-11-06 15:43:27.411731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.957 [2024-11-06 15:43:27.411967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.957 [2024-11-06 15:43:27.411980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.957 [2024-11-06 15:43:27.411989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.958 [2024-11-06 15:43:27.412000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.958 [2024-11-06 15:43:27.425151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.958 [2024-11-06 15:43:27.425512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.958 [2024-11-06 15:43:27.425536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.958 [2024-11-06 15:43:27.425546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.958 [2024-11-06 15:43:27.425773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.958 [2024-11-06 15:43:27.426001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.958 [2024-11-06 15:43:27.426014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.958 [2024-11-06 15:43:27.426023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.958 [2024-11-06 15:43:27.426036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.958 [2024-11-06 15:43:27.439224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.958 [2024-11-06 15:43:27.439710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.958 [2024-11-06 15:43:27.439768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.958 [2024-11-06 15:43:27.439799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.958 [2024-11-06 15:43:27.440596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.958 [2024-11-06 15:43:27.441179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.958 [2024-11-06 15:43:27.441192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.958 [2024-11-06 15:43:27.441200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.958 [2024-11-06 15:43:27.441213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.958 [2024-11-06 15:43:27.453154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.958 [2024-11-06 15:43:27.453567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.958 [2024-11-06 15:43:27.453593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.958 [2024-11-06 15:43:27.453603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.958 [2024-11-06 15:43:27.453820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.958 [2024-11-06 15:43:27.454035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.958 [2024-11-06 15:43:27.454048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.958 [2024-11-06 15:43:27.454056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.958 [2024-11-06 15:43:27.454066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.958 [2024-11-06 15:43:27.466946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.958 [2024-11-06 15:43:27.467404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.958 [2024-11-06 15:43:27.467427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.958 [2024-11-06 15:43:27.467438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.958 [2024-11-06 15:43:27.467666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.958 [2024-11-06 15:43:27.467895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.958 [2024-11-06 15:43:27.467907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.958 [2024-11-06 15:43:27.467916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.958 [2024-11-06 15:43:27.467925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.958 [2024-11-06 15:43:27.480772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.958 [2024-11-06 15:43:27.481262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.958 [2024-11-06 15:43:27.481285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.958 [2024-11-06 15:43:27.481294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.958 [2024-11-06 15:43:27.481528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.958 [2024-11-06 15:43:27.481744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.958 [2024-11-06 15:43:27.481757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.958 [2024-11-06 15:43:27.481765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.958 [2024-11-06 15:43:27.481773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.958 [2024-11-06 15:43:27.494516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.958 [2024-11-06 15:43:27.495002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.958 [2024-11-06 15:43:27.495023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.958 [2024-11-06 15:43:27.495033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.958 [2024-11-06 15:43:27.495255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.958 [2024-11-06 15:43:27.495471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.958 [2024-11-06 15:43:27.495483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.958 [2024-11-06 15:43:27.495492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.958 [2024-11-06 15:43:27.495501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.958 [2024-11-06 15:43:27.508231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.958 [2024-11-06 15:43:27.508716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.958 [2024-11-06 15:43:27.508773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.958 [2024-11-06 15:43:27.508806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.958 [2024-11-06 15:43:27.509298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.958 [2024-11-06 15:43:27.509527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.958 [2024-11-06 15:43:27.509539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.958 [2024-11-06 15:43:27.509548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.958 [2024-11-06 15:43:27.509557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.958 [2024-11-06 15:43:27.522009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.958 [2024-11-06 15:43:27.522496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.958 [2024-11-06 15:43:27.522519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.958 [2024-11-06 15:43:27.522535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.958 [2024-11-06 15:43:27.522750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.958 [2024-11-06 15:43:27.522967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.958 [2024-11-06 15:43:27.522979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.958 [2024-11-06 15:43:27.522988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.958 [2024-11-06 15:43:27.522998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.958 [2024-11-06 15:43:27.535710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.958 [2024-11-06 15:43:27.536177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.958 [2024-11-06 15:43:27.536206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.958 [2024-11-06 15:43:27.536217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.958 [2024-11-06 15:43:27.536460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.958 [2024-11-06 15:43:27.536688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.958 [2024-11-06 15:43:27.536701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.958 [2024-11-06 15:43:27.536710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.958 [2024-11-06 15:43:27.536720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.958 [2024-11-06 15:43:27.549397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.958 [2024-11-06 15:43:27.549776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.958 [2024-11-06 15:43:27.549798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.958 [2024-11-06 15:43:27.549808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.958 [2024-11-06 15:43:27.550024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.958 [2024-11-06 15:43:27.550249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.958 [2024-11-06 15:43:27.550278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.958 [2024-11-06 15:43:27.550288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.959 [2024-11-06 15:43:27.550297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.959 [2024-11-06 15:43:27.563063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.959 [2024-11-06 15:43:27.563549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.959 [2024-11-06 15:43:27.563572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.959 [2024-11-06 15:43:27.563581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.959 [2024-11-06 15:43:27.563800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.959 [2024-11-06 15:43:27.564017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.959 [2024-11-06 15:43:27.564029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.959 [2024-11-06 15:43:27.564037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.959 [2024-11-06 15:43:27.564046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.959 [2024-11-06 15:43:27.576782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.959 [2024-11-06 15:43:27.577259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.959 [2024-11-06 15:43:27.577281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.959 [2024-11-06 15:43:27.577290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.959 [2024-11-06 15:43:27.577505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.959 [2024-11-06 15:43:27.577721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.959 [2024-11-06 15:43:27.577733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.959 [2024-11-06 15:43:27.577742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.959 [2024-11-06 15:43:27.577751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:59.959 [2024-11-06 15:43:27.590776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:59.959 [2024-11-06 15:43:27.591293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.959 [2024-11-06 15:43:27.591315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:38:59.959 [2024-11-06 15:43:27.591326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:38:59.959 [2024-11-06 15:43:27.591554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:38:59.959 [2024-11-06 15:43:27.591781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:59.959 [2024-11-06 15:43:27.591794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:59.959 [2024-11-06 15:43:27.591803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:59.959 [2024-11-06 15:43:27.591813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.219 [2024-11-06 15:43:27.604612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.219 [2024-11-06 15:43:27.605067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.219 [2024-11-06 15:43:27.605126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.219 [2024-11-06 15:43:27.605157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.219 [2024-11-06 15:43:27.605955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.219 [2024-11-06 15:43:27.606506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.219 [2024-11-06 15:43:27.606522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.219 [2024-11-06 15:43:27.606530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.219 [2024-11-06 15:43:27.606539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.219 [2024-11-06 15:43:27.618438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.219 [2024-11-06 15:43:27.618888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.219 [2024-11-06 15:43:27.618935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.219 [2024-11-06 15:43:27.618969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.219 [2024-11-06 15:43:27.619582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.219 [2024-11-06 15:43:27.619811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.219 [2024-11-06 15:43:27.619824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.219 [2024-11-06 15:43:27.619833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.219 [2024-11-06 15:43:27.619842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.219 [2024-11-06 15:43:27.632188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.219 [2024-11-06 15:43:27.632676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.219 [2024-11-06 15:43:27.632730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.219 [2024-11-06 15:43:27.632764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.219 [2024-11-06 15:43:27.633374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.219 [2024-11-06 15:43:27.633603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.219 [2024-11-06 15:43:27.633616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.219 [2024-11-06 15:43:27.633625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.219 [2024-11-06 15:43:27.633635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.219 5304.50 IOPS, 20.72 MiB/s [2024-11-06T14:43:27.857Z] [2024-11-06 15:43:27.647224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.219 [2024-11-06 15:43:27.647639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.219 [2024-11-06 15:43:27.647699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.219 [2024-11-06 15:43:27.647731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.219 [2024-11-06 15:43:27.648342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.219 [2024-11-06 15:43:27.648625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.219 [2024-11-06 15:43:27.648650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.219 [2024-11-06 15:43:27.648670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.219 [2024-11-06 15:43:27.648695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.219 [2024-11-06 15:43:27.663687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.219 [2024-11-06 15:43:27.664304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.219 [2024-11-06 15:43:27.664334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.219 [2024-11-06 15:43:27.664349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.219 [2024-11-06 15:43:27.664691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.219 [2024-11-06 15:43:27.665034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.219 [2024-11-06 15:43:27.665052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.219 [2024-11-06 15:43:27.665065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.219 [2024-11-06 15:43:27.665079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.219 [2024-11-06 15:43:27.677744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.219 [2024-11-06 15:43:27.678242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.219 [2024-11-06 15:43:27.678265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.219 [2024-11-06 15:43:27.678275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.219 [2024-11-06 15:43:27.678509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.219 [2024-11-06 15:43:27.678736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.219 [2024-11-06 15:43:27.678749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.219 [2024-11-06 15:43:27.678757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.219 [2024-11-06 15:43:27.678765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.219 [2024-11-06 15:43:27.691691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.219 [2024-11-06 15:43:27.692195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.219 [2024-11-06 15:43:27.692266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.220 [2024-11-06 15:43:27.692297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.220 [2024-11-06 15:43:27.692817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.220 [2024-11-06 15:43:27.693045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.220 [2024-11-06 15:43:27.693058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.220 [2024-11-06 15:43:27.693068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.220 [2024-11-06 15:43:27.693078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.220 [2024-11-06 15:43:27.705572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.220 [2024-11-06 15:43:27.706043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.220 [2024-11-06 15:43:27.706065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.220 [2024-11-06 15:43:27.706075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.220 [2024-11-06 15:43:27.706295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.220 [2024-11-06 15:43:27.706511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.220 [2024-11-06 15:43:27.706524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.220 [2024-11-06 15:43:27.706532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.220 [2024-11-06 15:43:27.706541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.220 [2024-11-06 15:43:27.719538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.220 [2024-11-06 15:43:27.719940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.220 [2024-11-06 15:43:27.719961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.220 [2024-11-06 15:43:27.719971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.220 [2024-11-06 15:43:27.720187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.220 [2024-11-06 15:43:27.720436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.220 [2024-11-06 15:43:27.720449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.220 [2024-11-06 15:43:27.720459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.220 [2024-11-06 15:43:27.720468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.220 [2024-11-06 15:43:27.733422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.220 [2024-11-06 15:43:27.733926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.220 [2024-11-06 15:43:27.733984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.220 [2024-11-06 15:43:27.734017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.220 [2024-11-06 15:43:27.734556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.220 [2024-11-06 15:43:27.734786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.220 [2024-11-06 15:43:27.734800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.220 [2024-11-06 15:43:27.734809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.220 [2024-11-06 15:43:27.734819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.220 [2024-11-06 15:43:27.747175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.220 [2024-11-06 15:43:27.747631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.220 [2024-11-06 15:43:27.747655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.220 [2024-11-06 15:43:27.747667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.220 [2024-11-06 15:43:27.747884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.220 [2024-11-06 15:43:27.748100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.220 [2024-11-06 15:43:27.748112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.220 [2024-11-06 15:43:27.748120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.220 [2024-11-06 15:43:27.748129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.220 [2024-11-06 15:43:27.761009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.220 [2024-11-06 15:43:27.761439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.220 [2024-11-06 15:43:27.761463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.220 [2024-11-06 15:43:27.761473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.220 [2024-11-06 15:43:27.761701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.220 [2024-11-06 15:43:27.761928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.220 [2024-11-06 15:43:27.761941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.220 [2024-11-06 15:43:27.761951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.220 [2024-11-06 15:43:27.761961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.220 [2024-11-06 15:43:27.774802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.220 [2024-11-06 15:43:27.775258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.220 [2024-11-06 15:43:27.775281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.220 [2024-11-06 15:43:27.775291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.220 [2024-11-06 15:43:27.775508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.220 [2024-11-06 15:43:27.775724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.220 [2024-11-06 15:43:27.775737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.220 [2024-11-06 15:43:27.775746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.220 [2024-11-06 15:43:27.775756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.220 [2024-11-06 15:43:27.788535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.220 [2024-11-06 15:43:27.789018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.220 [2024-11-06 15:43:27.789043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.220 [2024-11-06 15:43:27.789053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.220 [2024-11-06 15:43:27.789293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.220 [2024-11-06 15:43:27.789525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.220 [2024-11-06 15:43:27.789538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.220 [2024-11-06 15:43:27.789548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.220 [2024-11-06 15:43:27.789558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.220 [2024-11-06 15:43:27.802381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.220 [2024-11-06 15:43:27.802833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.220 [2024-11-06 15:43:27.802855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.220 [2024-11-06 15:43:27.802865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.220 [2024-11-06 15:43:27.803080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.220 [2024-11-06 15:43:27.803326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.220 [2024-11-06 15:43:27.803339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.220 [2024-11-06 15:43:27.803348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.220 [2024-11-06 15:43:27.803357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.220 [2024-11-06 15:43:27.816033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.220 [2024-11-06 15:43:27.816372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.220 [2024-11-06 15:43:27.816394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.220 [2024-11-06 15:43:27.816403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.220 [2024-11-06 15:43:27.816619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.220 [2024-11-06 15:43:27.816835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.220 [2024-11-06 15:43:27.816848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.220 [2024-11-06 15:43:27.816857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.220 [2024-11-06 15:43:27.816866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.220 [2024-11-06 15:43:27.829687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.220 [2024-11-06 15:43:27.830156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.221 [2024-11-06 15:43:27.830178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.221 [2024-11-06 15:43:27.830188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.221 [2024-11-06 15:43:27.830437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.221 [2024-11-06 15:43:27.830666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.221 [2024-11-06 15:43:27.830678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.221 [2024-11-06 15:43:27.830690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.221 [2024-11-06 15:43:27.830700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.221 [2024-11-06 15:43:27.843373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.221 [2024-11-06 15:43:27.843832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.221 [2024-11-06 15:43:27.843854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.221 [2024-11-06 15:43:27.843864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.221 [2024-11-06 15:43:27.844080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.221 [2024-11-06 15:43:27.844322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.221 [2024-11-06 15:43:27.844341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.221 [2024-11-06 15:43:27.844351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.221 [2024-11-06 15:43:27.844360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.481 [2024-11-06 15:43:27.857275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.481 [2024-11-06 15:43:27.857681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.481 [2024-11-06 15:43:27.857743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.481 [2024-11-06 15:43:27.857776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.481 [2024-11-06 15:43:27.858572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.481 [2024-11-06 15:43:27.859177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.481 [2024-11-06 15:43:27.859189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.481 [2024-11-06 15:43:27.859198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.481 [2024-11-06 15:43:27.859213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.481 [2024-11-06 15:43:27.871157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.481 [2024-11-06 15:43:27.871659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.481 [2024-11-06 15:43:27.871721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.481 [2024-11-06 15:43:27.871753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.481 [2024-11-06 15:43:27.872268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.481 [2024-11-06 15:43:27.872485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.481 [2024-11-06 15:43:27.872496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.481 [2024-11-06 15:43:27.872505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.481 [2024-11-06 15:43:27.872514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.481 [2024-11-06 15:43:27.884931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.481 [2024-11-06 15:43:27.885401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.481 [2024-11-06 15:43:27.885423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.481 [2024-11-06 15:43:27.885432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.481 [2024-11-06 15:43:27.885647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.481 [2024-11-06 15:43:27.885862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.481 [2024-11-06 15:43:27.885874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.481 [2024-11-06 15:43:27.885883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.481 [2024-11-06 15:43:27.885893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.481 [2024-11-06 15:43:27.898691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.481 [2024-11-06 15:43:27.899155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.481 [2024-11-06 15:43:27.899229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.481 [2024-11-06 15:43:27.899266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.482 [2024-11-06 15:43:27.900046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.482 [2024-11-06 15:43:27.900444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.482 [2024-11-06 15:43:27.900458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.482 [2024-11-06 15:43:27.900467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.482 [2024-11-06 15:43:27.900477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.482 [2024-11-06 15:43:27.912529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.482 [2024-11-06 15:43:27.913008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.482 [2024-11-06 15:43:27.913029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.482 [2024-11-06 15:43:27.913039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.482 [2024-11-06 15:43:27.913279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.482 [2024-11-06 15:43:27.913508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.482 [2024-11-06 15:43:27.913521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.482 [2024-11-06 15:43:27.913530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.482 [2024-11-06 15:43:27.913539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.482 [2024-11-06 15:43:27.926420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.482 [2024-11-06 15:43:27.926831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.482 [2024-11-06 15:43:27.926857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.482 [2024-11-06 15:43:27.926868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.482 [2024-11-06 15:43:27.927095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.482 [2024-11-06 15:43:27.927349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.482 [2024-11-06 15:43:27.927363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.482 [2024-11-06 15:43:27.927372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.482 [2024-11-06 15:43:27.927382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.482 [2024-11-06 15:43:27.940540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.482 [2024-11-06 15:43:27.940978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.482 [2024-11-06 15:43:27.941002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.482 [2024-11-06 15:43:27.941012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.482 [2024-11-06 15:43:27.941246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.482 [2024-11-06 15:43:27.941475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.482 [2024-11-06 15:43:27.941487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.482 [2024-11-06 15:43:27.941496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.482 [2024-11-06 15:43:27.941506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.482 [2024-11-06 15:43:27.954528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.482 [2024-11-06 15:43:27.954946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.482 [2024-11-06 15:43:27.954969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.482 [2024-11-06 15:43:27.954979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.482 [2024-11-06 15:43:27.955213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.482 [2024-11-06 15:43:27.955442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.482 [2024-11-06 15:43:27.955457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.482 [2024-11-06 15:43:27.955465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.482 [2024-11-06 15:43:27.955475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.482 [2024-11-06 15:43:27.968363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.482 [2024-11-06 15:43:27.968800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.482 [2024-11-06 15:43:27.968822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.482 [2024-11-06 15:43:27.968833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.482 [2024-11-06 15:43:27.969066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.482 [2024-11-06 15:43:27.969305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.482 [2024-11-06 15:43:27.969321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.482 [2024-11-06 15:43:27.969333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.482 [2024-11-06 15:43:27.969345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.482 [2024-11-06 15:43:27.982227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.482 [2024-11-06 15:43:27.982725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.482 [2024-11-06 15:43:27.982784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.482 [2024-11-06 15:43:27.982816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.482 [2024-11-06 15:43:27.983398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.482 [2024-11-06 15:43:27.983616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.482 [2024-11-06 15:43:27.983628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.482 [2024-11-06 15:43:27.983638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.482 [2024-11-06 15:43:27.983647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.482 [2024-11-06 15:43:27.995999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.482 [2024-11-06 15:43:27.996402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.482 [2024-11-06 15:43:27.996425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.482 [2024-11-06 15:43:27.996435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.482 [2024-11-06 15:43:27.996650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.482 [2024-11-06 15:43:27.996867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.482 [2024-11-06 15:43:27.996879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.482 [2024-11-06 15:43:27.996888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.482 [2024-11-06 15:43:27.996897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.482 [2024-11-06 15:43:28.009780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.482 [2024-11-06 15:43:28.010266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.482 [2024-11-06 15:43:28.010290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.482 [2024-11-06 15:43:28.010301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.482 [2024-11-06 15:43:28.010517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.482 [2024-11-06 15:43:28.010738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.482 [2024-11-06 15:43:28.010751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.482 [2024-11-06 15:43:28.010760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.482 [2024-11-06 15:43:28.010770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.482 [2024-11-06 15:43:28.023561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.482 [2024-11-06 15:43:28.024030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.482 [2024-11-06 15:43:28.024091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.482 [2024-11-06 15:43:28.024124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.482 [2024-11-06 15:43:28.024598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.482 [2024-11-06 15:43:28.024827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.483 [2024-11-06 15:43:28.024839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.483 [2024-11-06 15:43:28.024849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.483 [2024-11-06 15:43:28.024859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.483 [2024-11-06 15:43:28.037272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.483 [2024-11-06 15:43:28.037657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.483 [2024-11-06 15:43:28.037718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.483 [2024-11-06 15:43:28.037752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.483 [2024-11-06 15:43:28.038371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.483 [2024-11-06 15:43:28.038600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.483 [2024-11-06 15:43:28.038613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.483 [2024-11-06 15:43:28.038622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.483 [2024-11-06 15:43:28.038638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.483 [2024-11-06 15:43:28.051020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.483 [2024-11-06 15:43:28.051438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.483 [2024-11-06 15:43:28.051461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.483 [2024-11-06 15:43:28.051471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.483 [2024-11-06 15:43:28.051686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.483 [2024-11-06 15:43:28.051903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.483 [2024-11-06 15:43:28.051915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.483 [2024-11-06 15:43:28.051926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.483 [2024-11-06 15:43:28.051936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.483 [2024-11-06 15:43:28.064727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.483 [2024-11-06 15:43:28.065212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.483 [2024-11-06 15:43:28.065234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.483 [2024-11-06 15:43:28.065245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.483 [2024-11-06 15:43:28.065461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.483 [2024-11-06 15:43:28.065677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.483 [2024-11-06 15:43:28.065690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.483 [2024-11-06 15:43:28.065698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.483 [2024-11-06 15:43:28.065707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.483 [2024-11-06 15:43:28.078406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.483 [2024-11-06 15:43:28.078886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.483 [2024-11-06 15:43:28.078908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.483 [2024-11-06 15:43:28.078918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.483 [2024-11-06 15:43:28.079134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.483 [2024-11-06 15:43:28.079379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.483 [2024-11-06 15:43:28.079392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.483 [2024-11-06 15:43:28.079401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.483 [2024-11-06 15:43:28.079411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.483 [2024-11-06 15:43:28.092165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.483 [2024-11-06 15:43:28.092648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.483 [2024-11-06 15:43:28.092670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.483 [2024-11-06 15:43:28.092680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.483 [2024-11-06 15:43:28.092896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.483 [2024-11-06 15:43:28.093112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.483 [2024-11-06 15:43:28.093125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.483 [2024-11-06 15:43:28.093134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.483 [2024-11-06 15:43:28.093143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.483 [2024-11-06 15:43:28.105930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.483 [2024-11-06 15:43:28.106391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.483 [2024-11-06 15:43:28.106453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.483 [2024-11-06 15:43:28.106486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.483 [2024-11-06 15:43:28.107280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.483 [2024-11-06 15:43:28.107730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.483 [2024-11-06 15:43:28.107743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.483 [2024-11-06 15:43:28.107752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.483 [2024-11-06 15:43:28.107762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.744 [2024-11-06 15:43:28.119916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.744 [2024-11-06 15:43:28.120403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.744 [2024-11-06 15:43:28.120426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.744 [2024-11-06 15:43:28.120436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.744 [2024-11-06 15:43:28.120653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.744 [2024-11-06 15:43:28.120869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.744 [2024-11-06 15:43:28.120881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.744 [2024-11-06 15:43:28.120890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.744 [2024-11-06 15:43:28.120899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.744 [2024-11-06 15:43:28.133675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.744 [2024-11-06 15:43:28.133984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.744 [2024-11-06 15:43:28.134006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.744 [2024-11-06 15:43:28.134016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.744 [2024-11-06 15:43:28.134238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.744 [2024-11-06 15:43:28.134483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.744 [2024-11-06 15:43:28.134496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.744 [2024-11-06 15:43:28.134504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.744 [2024-11-06 15:43:28.134514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.744 [2024-11-06 15:43:28.147447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.744 [2024-11-06 15:43:28.147938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.744 [2024-11-06 15:43:28.147962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.744 [2024-11-06 15:43:28.147972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.744 [2024-11-06 15:43:28.148187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.744 [2024-11-06 15:43:28.148434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.744 [2024-11-06 15:43:28.148448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.744 [2024-11-06 15:43:28.148457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.744 [2024-11-06 15:43:28.148467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.744 [2024-11-06 15:43:28.161281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:00.744 [2024-11-06 15:43:28.161764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.744 [2024-11-06 15:43:28.161814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:00.744 [2024-11-06 15:43:28.161849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:00.744 [2024-11-06 15:43:28.162647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:00.744 [2024-11-06 15:43:28.163028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:00.744 [2024-11-06 15:43:28.163041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:00.744 [2024-11-06 15:43:28.163049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:00.744 [2024-11-06 15:43:28.163059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:00.744 [2024-11-06 15:43:28.174994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.744 [2024-11-06 15:43:28.175490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.744 [2024-11-06 15:43:28.175549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.744 [2024-11-06 15:43:28.175581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.744 [2024-11-06 15:43:28.176113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.744 [2024-11-06 15:43:28.176358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.744 [2024-11-06 15:43:28.176371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.744 [2024-11-06 15:43:28.176381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.744 [2024-11-06 15:43:28.176391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.744 [2024-11-06 15:43:28.188776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.744 [2024-11-06 15:43:28.189206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.744 [2024-11-06 15:43:28.189246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.744 [2024-11-06 15:43:28.189257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.744 [2024-11-06 15:43:28.189494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.744 [2024-11-06 15:43:28.189728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.744 [2024-11-06 15:43:28.189741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.744 [2024-11-06 15:43:28.189751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.744 [2024-11-06 15:43:28.189761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.744 [2024-11-06 15:43:28.202920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.744 [2024-11-06 15:43:28.203399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.744 [2024-11-06 15:43:28.203423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.744 [2024-11-06 15:43:28.203434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.744 [2024-11-06 15:43:28.203663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.744 [2024-11-06 15:43:28.203891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.744 [2024-11-06 15:43:28.203903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.744 [2024-11-06 15:43:28.203912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.744 [2024-11-06 15:43:28.203922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.744 [2024-11-06 15:43:28.216836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.744 [2024-11-06 15:43:28.217247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.744 [2024-11-06 15:43:28.217270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.744 [2024-11-06 15:43:28.217281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.744 [2024-11-06 15:43:28.217510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.744 [2024-11-06 15:43:28.217738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.744 [2024-11-06 15:43:28.217751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.744 [2024-11-06 15:43:28.217760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.744 [2024-11-06 15:43:28.217770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.744 [2024-11-06 15:43:28.230522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.744 [2024-11-06 15:43:28.231035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.744 [2024-11-06 15:43:28.231095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.744 [2024-11-06 15:43:28.231127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.744 [2024-11-06 15:43:28.231926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.744 [2024-11-06 15:43:28.232267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.745 [2024-11-06 15:43:28.232284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.745 [2024-11-06 15:43:28.232293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.745 [2024-11-06 15:43:28.232303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.745 [2024-11-06 15:43:28.244224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.745 [2024-11-06 15:43:28.244644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.745 [2024-11-06 15:43:28.244666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.745 [2024-11-06 15:43:28.244676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.745 [2024-11-06 15:43:28.244892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.745 [2024-11-06 15:43:28.245108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.745 [2024-11-06 15:43:28.245121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.745 [2024-11-06 15:43:28.245130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.745 [2024-11-06 15:43:28.245139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.745 [2024-11-06 15:43:28.257873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.745 [2024-11-06 15:43:28.258359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.745 [2024-11-06 15:43:28.258420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.745 [2024-11-06 15:43:28.258452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.745 [2024-11-06 15:43:28.259104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.745 [2024-11-06 15:43:28.259346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.745 [2024-11-06 15:43:28.259360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.745 [2024-11-06 15:43:28.259369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.745 [2024-11-06 15:43:28.259378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.745 [2024-11-06 15:43:28.271689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.745 [2024-11-06 15:43:28.272165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.745 [2024-11-06 15:43:28.272188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.745 [2024-11-06 15:43:28.272198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.745 [2024-11-06 15:43:28.272444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.745 [2024-11-06 15:43:28.272672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.745 [2024-11-06 15:43:28.272684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.745 [2024-11-06 15:43:28.272694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.745 [2024-11-06 15:43:28.272707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.745 [2024-11-06 15:43:28.285448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.745 [2024-11-06 15:43:28.285909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.745 [2024-11-06 15:43:28.285930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.745 [2024-11-06 15:43:28.285942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.745 [2024-11-06 15:43:28.286158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.745 [2024-11-06 15:43:28.286380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.745 [2024-11-06 15:43:28.286394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.745 [2024-11-06 15:43:28.286403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.745 [2024-11-06 15:43:28.286412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.745 [2024-11-06 15:43:28.299292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.745 [2024-11-06 15:43:28.299642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.745 [2024-11-06 15:43:28.299664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.745 [2024-11-06 15:43:28.299675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.745 [2024-11-06 15:43:28.299889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.745 [2024-11-06 15:43:28.300106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.745 [2024-11-06 15:43:28.300119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.745 [2024-11-06 15:43:28.300127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.745 [2024-11-06 15:43:28.300136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.745 [2024-11-06 15:43:28.313073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.745 [2024-11-06 15:43:28.313579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.745 [2024-11-06 15:43:28.313638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.745 [2024-11-06 15:43:28.313670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.745 [2024-11-06 15:43:28.314194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.745 [2024-11-06 15:43:28.314442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.745 [2024-11-06 15:43:28.314456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.745 [2024-11-06 15:43:28.314465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.745 [2024-11-06 15:43:28.314476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.745 [2024-11-06 15:43:28.327085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.745 [2024-11-06 15:43:28.327576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.745 [2024-11-06 15:43:28.327600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.745 [2024-11-06 15:43:28.327610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.745 [2024-11-06 15:43:28.327851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.745 [2024-11-06 15:43:28.328080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.745 [2024-11-06 15:43:28.328092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.745 [2024-11-06 15:43:28.328101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.745 [2024-11-06 15:43:28.328111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.745 [2024-11-06 15:43:28.340850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.745 [2024-11-06 15:43:28.341347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.745 [2024-11-06 15:43:28.341409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.745 [2024-11-06 15:43:28.341442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.745 [2024-11-06 15:43:28.342040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.745 [2024-11-06 15:43:28.342262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.745 [2024-11-06 15:43:28.342275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.745 [2024-11-06 15:43:28.342284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.745 [2024-11-06 15:43:28.342293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.745 [2024-11-06 15:43:28.354684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.745 [2024-11-06 15:43:28.355090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.745 [2024-11-06 15:43:28.355112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.745 [2024-11-06 15:43:28.355121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.745 [2024-11-06 15:43:28.355343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.745 [2024-11-06 15:43:28.355559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.745 [2024-11-06 15:43:28.355572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.745 [2024-11-06 15:43:28.355581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.745 [2024-11-06 15:43:28.355590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:00.745 [2024-11-06 15:43:28.368481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:00.745 [2024-11-06 15:43:28.368956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.745 [2024-11-06 15:43:28.369006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:00.745 [2024-11-06 15:43:28.369049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:00.746 [2024-11-06 15:43:28.369843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:00.746 [2024-11-06 15:43:28.370355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:00.746 [2024-11-06 15:43:28.370368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:00.746 [2024-11-06 15:43:28.370378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:00.746 [2024-11-06 15:43:28.370387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.006 [2024-11-06 15:43:28.382333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.006 [2024-11-06 15:43:28.382810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.006 [2024-11-06 15:43:28.382869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.006 [2024-11-06 15:43:28.382902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.006 [2024-11-06 15:43:28.383445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.006 [2024-11-06 15:43:28.383675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.006 [2024-11-06 15:43:28.383688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.006 [2024-11-06 15:43:28.383697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.006 [2024-11-06 15:43:28.383707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.006 [2024-11-06 15:43:28.396023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.006 [2024-11-06 15:43:28.396486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.006 [2024-11-06 15:43:28.396508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.006 [2024-11-06 15:43:28.396518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.006 [2024-11-06 15:43:28.396735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.006 [2024-11-06 15:43:28.396952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.006 [2024-11-06 15:43:28.396964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.006 [2024-11-06 15:43:28.396972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.006 [2024-11-06 15:43:28.396981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.006 [2024-11-06 15:43:28.409767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.006 [2024-11-06 15:43:28.410178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.006 [2024-11-06 15:43:28.410199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.006 [2024-11-06 15:43:28.410215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.006 [2024-11-06 15:43:28.410455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.006 [2024-11-06 15:43:28.410686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.006 [2024-11-06 15:43:28.410699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.006 [2024-11-06 15:43:28.410708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.006 [2024-11-06 15:43:28.410718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.006 [2024-11-06 15:43:28.423466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.006 [2024-11-06 15:43:28.423946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.006 [2024-11-06 15:43:28.423968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.006 [2024-11-06 15:43:28.423978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.006 [2024-11-06 15:43:28.424194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.006 [2024-11-06 15:43:28.424416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.006 [2024-11-06 15:43:28.424429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.006 [2024-11-06 15:43:28.424438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.006 [2024-11-06 15:43:28.424447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.006 [2024-11-06 15:43:28.437281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.006 [2024-11-06 15:43:28.437798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.006 [2024-11-06 15:43:28.437823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.006 [2024-11-06 15:43:28.437841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.006 [2024-11-06 15:43:28.438068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.006 [2024-11-06 15:43:28.438302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.006 [2024-11-06 15:43:28.438316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.006 [2024-11-06 15:43:28.438325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.006 [2024-11-06 15:43:28.438334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.006 [2024-11-06 15:43:28.451159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.006 [2024-11-06 15:43:28.451665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.006 [2024-11-06 15:43:28.451688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.006 [2024-11-06 15:43:28.451699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.006 [2024-11-06 15:43:28.451926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.006 [2024-11-06 15:43:28.452155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.007 [2024-11-06 15:43:28.452171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.007 [2024-11-06 15:43:28.452181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.007 [2024-11-06 15:43:28.452191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.007 [2024-11-06 15:43:28.465362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.007 [2024-11-06 15:43:28.465771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.007 [2024-11-06 15:43:28.465794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.007 [2024-11-06 15:43:28.465804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.007 [2024-11-06 15:43:28.466031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.007 [2024-11-06 15:43:28.466266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.007 [2024-11-06 15:43:28.466280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.007 [2024-11-06 15:43:28.466289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.007 [2024-11-06 15:43:28.466299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.007 [2024-11-06 15:43:28.479238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.007 [2024-11-06 15:43:28.479691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.007 [2024-11-06 15:43:28.479750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.007 [2024-11-06 15:43:28.479783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.007 [2024-11-06 15:43:28.480350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.007 [2024-11-06 15:43:28.480579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.007 [2024-11-06 15:43:28.480593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.007 [2024-11-06 15:43:28.480602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.007 [2024-11-06 15:43:28.480611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.007 [2024-11-06 15:43:28.493061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.007 [2024-11-06 15:43:28.493550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.007 [2024-11-06 15:43:28.493575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.007 [2024-11-06 15:43:28.493586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.007 [2024-11-06 15:43:28.493813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.007 [2024-11-06 15:43:28.494064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.007 [2024-11-06 15:43:28.494077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.007 [2024-11-06 15:43:28.494088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.007 [2024-11-06 15:43:28.494100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.007 [2024-11-06 15:43:28.506987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.007 [2024-11-06 15:43:28.507410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.007 [2024-11-06 15:43:28.507433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.007 [2024-11-06 15:43:28.507444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.007 [2024-11-06 15:43:28.507670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.007 [2024-11-06 15:43:28.507898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.007 [2024-11-06 15:43:28.507911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.007 [2024-11-06 15:43:28.507921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.007 [2024-11-06 15:43:28.507931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.007 [2024-11-06 15:43:28.520903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.007 [2024-11-06 15:43:28.521243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.007 [2024-11-06 15:43:28.521266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.007 [2024-11-06 15:43:28.521276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.007 [2024-11-06 15:43:28.521492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.007 [2024-11-06 15:43:28.521708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.007 [2024-11-06 15:43:28.521720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.007 [2024-11-06 15:43:28.521729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.007 [2024-11-06 15:43:28.521738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.007 [2024-11-06 15:43:28.534791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.007 [2024-11-06 15:43:28.535215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.007 [2024-11-06 15:43:28.535238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.007 [2024-11-06 15:43:28.535249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.007 [2024-11-06 15:43:28.535476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.007 [2024-11-06 15:43:28.535705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.007 [2024-11-06 15:43:28.535718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.007 [2024-11-06 15:43:28.535727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.007 [2024-11-06 15:43:28.535737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.007 [2024-11-06 15:43:28.548682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.007 [2024-11-06 15:43:28.549121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.007 [2024-11-06 15:43:28.549180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.007 [2024-11-06 15:43:28.549232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.007 [2024-11-06 15:43:28.549781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.007 [2024-11-06 15:43:28.549997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.007 [2024-11-06 15:43:28.550010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.007 [2024-11-06 15:43:28.550018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.007 [2024-11-06 15:43:28.550027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.007 [2024-11-06 15:43:28.562569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.007 [2024-11-06 15:43:28.562971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.007 [2024-11-06 15:43:28.562994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.007 [2024-11-06 15:43:28.563004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.007 [2024-11-06 15:43:28.563237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.007 [2024-11-06 15:43:28.563471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.007 [2024-11-06 15:43:28.563484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.007 [2024-11-06 15:43:28.563492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.007 [2024-11-06 15:43:28.563502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.007 [2024-11-06 15:43:28.576483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.007 [2024-11-06 15:43:28.576957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.007 [2024-11-06 15:43:28.577015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.007 [2024-11-06 15:43:28.577048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.007 [2024-11-06 15:43:28.577590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.007 [2024-11-06 15:43:28.577807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.007 [2024-11-06 15:43:28.577819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.007 [2024-11-06 15:43:28.577828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.007 [2024-11-06 15:43:28.577838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.007 [2024-11-06 15:43:28.590386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.007 [2024-11-06 15:43:28.590796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.007 [2024-11-06 15:43:28.590819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.007 [2024-11-06 15:43:28.590833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.007 [2024-11-06 15:43:28.591061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.008 [2024-11-06 15:43:28.591295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.008 [2024-11-06 15:43:28.591309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.008 [2024-11-06 15:43:28.591318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.008 [2024-11-06 15:43:28.591327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.008 [2024-11-06 15:43:28.604357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.008 [2024-11-06 15:43:28.604753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.008 [2024-11-06 15:43:28.604776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.008 [2024-11-06 15:43:28.604785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.008 [2024-11-06 15:43:28.605001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.008 [2024-11-06 15:43:28.605223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.008 [2024-11-06 15:43:28.605236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.008 [2024-11-06 15:43:28.605245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.008 [2024-11-06 15:43:28.605254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.008 [2024-11-06 15:43:28.618286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.008 [2024-11-06 15:43:28.618639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.008 [2024-11-06 15:43:28.618662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.008 [2024-11-06 15:43:28.618672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.008 [2024-11-06 15:43:28.618887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.008 [2024-11-06 15:43:28.619103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.008 [2024-11-06 15:43:28.619116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.008 [2024-11-06 15:43:28.619125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.008 [2024-11-06 15:43:28.619134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.008 [2024-11-06 15:43:28.632050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.008 [2024-11-06 15:43:28.632493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.008 [2024-11-06 15:43:28.632517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.008 [2024-11-06 15:43:28.632528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.008 [2024-11-06 15:43:28.632756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.008 [2024-11-06 15:43:28.632992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.008 [2024-11-06 15:43:28.633006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.008 [2024-11-06 15:43:28.633015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.008 [2024-11-06 15:43:28.633025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.268 [2024-11-06 15:43:28.647219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.268 4243.60 IOPS, 16.58 MiB/s [2024-11-06T14:43:28.906Z] [2024-11-06 15:43:28.647583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.268 [2024-11-06 15:43:28.647605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.268 [2024-11-06 15:43:28.647616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.268 [2024-11-06 15:43:28.647845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.268 [2024-11-06 15:43:28.648073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.268 [2024-11-06 15:43:28.648086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.268 [2024-11-06 15:43:28.648095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.268 [2024-11-06 15:43:28.648105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.268 [2024-11-06 15:43:28.661134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.268 [2024-11-06 15:43:28.661526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.268 [2024-11-06 15:43:28.661548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.268 [2024-11-06 15:43:28.661558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.268 [2024-11-06 15:43:28.661779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.268 [2024-11-06 15:43:28.661995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.268 [2024-11-06 15:43:28.662007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.268 [2024-11-06 15:43:28.662017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.268 [2024-11-06 15:43:28.662028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.268 [2024-11-06 15:43:28.674929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.268 [2024-11-06 15:43:28.675339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.268 [2024-11-06 15:43:28.675363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.268 [2024-11-06 15:43:28.675374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.268 [2024-11-06 15:43:28.675602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.268 [2024-11-06 15:43:28.675831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.268 [2024-11-06 15:43:28.675843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.268 [2024-11-06 15:43:28.675858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.268 [2024-11-06 15:43:28.675868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.268 [2024-11-06 15:43:28.688727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.268 [2024-11-06 15:43:28.689135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.268 [2024-11-06 15:43:28.689194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.268 [2024-11-06 15:43:28.689243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.269 [2024-11-06 15:43:28.690024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.269 [2024-11-06 15:43:28.690381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.269 [2024-11-06 15:43:28.690394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.269 [2024-11-06 15:43:28.690403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.269 [2024-11-06 15:43:28.690413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.269 [2024-11-06 15:43:28.702560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.269 [2024-11-06 15:43:28.702968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.269 [2024-11-06 15:43:28.702990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.269 [2024-11-06 15:43:28.703000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.269 [2024-11-06 15:43:28.703234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.269 [2024-11-06 15:43:28.703464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.269 [2024-11-06 15:43:28.703476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.269 [2024-11-06 15:43:28.703486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.269 [2024-11-06 15:43:28.703496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.269 [2024-11-06 15:43:28.716617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.269 [2024-11-06 15:43:28.717031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.269 [2024-11-06 15:43:28.717090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.269 [2024-11-06 15:43:28.717124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.269 [2024-11-06 15:43:28.717922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.269 [2024-11-06 15:43:28.718380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.269 [2024-11-06 15:43:28.718394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.269 [2024-11-06 15:43:28.718403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.269 [2024-11-06 15:43:28.718413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.269 [2024-11-06 15:43:28.730576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.269 [2024-11-06 15:43:28.730923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.269 [2024-11-06 15:43:28.730946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.269 [2024-11-06 15:43:28.730957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.269 [2024-11-06 15:43:28.731184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.269 [2024-11-06 15:43:28.731419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.269 [2024-11-06 15:43:28.731432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.269 [2024-11-06 15:43:28.731441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.269 [2024-11-06 15:43:28.731451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.269 [2024-11-06 15:43:28.744413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.269 [2024-11-06 15:43:28.744874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.269 [2024-11-06 15:43:28.744933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.269 [2024-11-06 15:43:28.744967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.269 [2024-11-06 15:43:28.745761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.269 [2024-11-06 15:43:28.746264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.269 [2024-11-06 15:43:28.746278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.269 [2024-11-06 15:43:28.746287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.269 [2024-11-06 15:43:28.746298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.269 [2024-11-06 15:43:28.758170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.269 [2024-11-06 15:43:28.758549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.269 [2024-11-06 15:43:28.758572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.269 [2024-11-06 15:43:28.758582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.269 [2024-11-06 15:43:28.758810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.269 [2024-11-06 15:43:28.759038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.269 [2024-11-06 15:43:28.759051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.269 [2024-11-06 15:43:28.759060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.269 [2024-11-06 15:43:28.759069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.269 [2024-11-06 15:43:28.772022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.269 [2024-11-06 15:43:28.772420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.269 [2024-11-06 15:43:28.772445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.269 [2024-11-06 15:43:28.772455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.269 [2024-11-06 15:43:28.772671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.269 [2024-11-06 15:43:28.772886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.269 [2024-11-06 15:43:28.772900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.269 [2024-11-06 15:43:28.772909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.269 [2024-11-06 15:43:28.772918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.269 [2024-11-06 15:43:28.785963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.269 [2024-11-06 15:43:28.786454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.269 [2024-11-06 15:43:28.786512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.269 [2024-11-06 15:43:28.786545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.269 [2024-11-06 15:43:28.787126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.269 [2024-11-06 15:43:28.787348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.269 [2024-11-06 15:43:28.787361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.269 [2024-11-06 15:43:28.787369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.269 [2024-11-06 15:43:28.787378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.269 [2024-11-06 15:43:28.799853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.269 [2024-11-06 15:43:28.800269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.269 [2024-11-06 15:43:28.800292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.269 [2024-11-06 15:43:28.800302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.269 [2024-11-06 15:43:28.800518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.269 [2024-11-06 15:43:28.800735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.269 [2024-11-06 15:43:28.800747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.269 [2024-11-06 15:43:28.800755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.269 [2024-11-06 15:43:28.800764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.269 [2024-11-06 15:43:28.813715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.269 [2024-11-06 15:43:28.814120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.269 [2024-11-06 15:43:28.814179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.269 [2024-11-06 15:43:28.814224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.269 [2024-11-06 15:43:28.815014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.269 [2024-11-06 15:43:28.815462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.269 [2024-11-06 15:43:28.815475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.269 [2024-11-06 15:43:28.815483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.269 [2024-11-06 15:43:28.815493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.269 [2024-11-06 15:43:28.827625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.269 [2024-11-06 15:43:28.827955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.270 [2024-11-06 15:43:28.827977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.270 [2024-11-06 15:43:28.827987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.270 [2024-11-06 15:43:28.828209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.270 [2024-11-06 15:43:28.828425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.270 [2024-11-06 15:43:28.828439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.270 [2024-11-06 15:43:28.828454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.270 [2024-11-06 15:43:28.828463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.270 [2024-11-06 15:43:28.841494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.270 [2024-11-06 15:43:28.841954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.270 [2024-11-06 15:43:28.842013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.270 [2024-11-06 15:43:28.842046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.270 [2024-11-06 15:43:28.842558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.270 [2024-11-06 15:43:28.842786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.270 [2024-11-06 15:43:28.842799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.270 [2024-11-06 15:43:28.842808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.270 [2024-11-06 15:43:28.842818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4102588 Killed "${NVMF_APP[@]}" "$@"
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4104204
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4104204
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 4104204 ']'
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:01.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:01.270 [2024-11-06 15:43:28.855623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:39:01.270 [2024-11-06 15:43:28.856051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.270 [2024-11-06 15:43:28.856075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.270 [2024-11-06 15:43:28.856086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.270 15:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:39:01.270 [2024-11-06 15:43:28.856327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.270 [2024-11-06 15:43:28.856562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.270 [2024-11-06 15:43:28.856575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.270 [2024-11-06 15:43:28.856585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.270 [2024-11-06 15:43:28.856594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.270 [2024-11-06 15:43:28.869702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.270 [2024-11-06 15:43:28.870186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.270 [2024-11-06 15:43:28.870216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.270 [2024-11-06 15:43:28.870227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.270 [2024-11-06 15:43:28.870461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.270 [2024-11-06 15:43:28.870696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.270 [2024-11-06 15:43:28.870709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.270 [2024-11-06 15:43:28.870719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.270 [2024-11-06 15:43:28.870729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.270 [2024-11-06 15:43:28.883879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.270 [2024-11-06 15:43:28.884290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.270 [2024-11-06 15:43:28.884315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.270 [2024-11-06 15:43:28.884326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.270 [2024-11-06 15:43:28.884566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.270 [2024-11-06 15:43:28.884802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.270 [2024-11-06 15:43:28.884815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.270 [2024-11-06 15:43:28.884825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.270 [2024-11-06 15:43:28.884834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.270 [2024-11-06 15:43:28.897858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.270 [2024-11-06 15:43:28.898253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.270 [2024-11-06 15:43:28.898278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.270 [2024-11-06 15:43:28.898290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.270 [2024-11-06 15:43:28.898528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.270 [2024-11-06 15:43:28.898764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.270 [2024-11-06 15:43:28.898778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.270 [2024-11-06 15:43:28.898789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.270 [2024-11-06 15:43:28.898800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.531 [2024-11-06 15:43:28.911939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.531 [2024-11-06 15:43:28.912545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.531 [2024-11-06 15:43:28.912573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.531 [2024-11-06 15:43:28.912584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.531 [2024-11-06 15:43:28.912819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.531 [2024-11-06 15:43:28.913050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.531 [2024-11-06 15:43:28.913063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.531 [2024-11-06 15:43:28.913073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.531 [2024-11-06 15:43:28.913083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.531 [2024-11-06 15:43:28.926091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.531 [2024-11-06 15:43:28.926588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.532 [2024-11-06 15:43:28.926613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.532 [2024-11-06 15:43:28.926624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.532 [2024-11-06 15:43:28.926856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.532 [2024-11-06 15:43:28.927089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.532 [2024-11-06 15:43:28.927107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.532 [2024-11-06 15:43:28.927116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.532 [2024-11-06 15:43:28.927126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.532 [2024-11-06 15:43:28.934646] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:39:01.532 [2024-11-06 15:43:28.934722] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:39:01.532 [2024-11-06 15:43:28.940219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.532 [2024-11-06 15:43:28.940683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.532 [2024-11-06 15:43:28.940707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.532 [2024-11-06 15:43:28.940719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.532 [2024-11-06 15:43:28.940957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.532 [2024-11-06 15:43:28.941195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.532 [2024-11-06 15:43:28.941215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.532 [2024-11-06 15:43:28.941225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.532 [2024-11-06 15:43:28.941235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.532 [2024-11-06 15:43:28.954229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.532 [2024-11-06 15:43:28.954723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.532 [2024-11-06 15:43:28.954748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.532 [2024-11-06 15:43:28.954761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.532 [2024-11-06 15:43:28.955000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.532 [2024-11-06 15:43:28.955246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.532 [2024-11-06 15:43:28.955261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.532 [2024-11-06 15:43:28.955272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.532 [2024-11-06 15:43:28.955283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.532 [2024-11-06 15:43:28.968231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.532 [2024-11-06 15:43:28.968752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.532 [2024-11-06 15:43:28.968775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.532 [2024-11-06 15:43:28.968788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.532 [2024-11-06 15:43:28.969026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.532 [2024-11-06 15:43:28.969275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.532 [2024-11-06 15:43:28.969289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.532 [2024-11-06 15:43:28.969299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.532 [2024-11-06 15:43:28.969310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.532 [2024-11-06 15:43:28.982286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.532 [2024-11-06 15:43:28.982686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.532 [2024-11-06 15:43:28.982709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.532 [2024-11-06 15:43:28.982720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.532 [2024-11-06 15:43:28.982958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.532 [2024-11-06 15:43:28.983197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.532 [2024-11-06 15:43:28.983216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.532 [2024-11-06 15:43:28.983227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.532 [2024-11-06 15:43:28.983238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.532 [2024-11-06 15:43:28.996274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.532 [2024-11-06 15:43:28.996757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.532 [2024-11-06 15:43:28.996782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.532 [2024-11-06 15:43:28.996794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.532 [2024-11-06 15:43:28.997027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.532 [2024-11-06 15:43:28.997267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.532 [2024-11-06 15:43:28.997280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.532 [2024-11-06 15:43:28.997290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.532 [2024-11-06 15:43:28.997300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.532 [2024-11-06 15:43:29.010404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.532 [2024-11-06 15:43:29.010814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.532 [2024-11-06 15:43:29.010839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.532 [2024-11-06 15:43:29.010851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.532 [2024-11-06 15:43:29.011090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.532 [2024-11-06 15:43:29.011334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.532 [2024-11-06 15:43:29.011349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.532 [2024-11-06 15:43:29.011362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.532 [2024-11-06 15:43:29.011373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.532 [2024-11-06 15:43:29.024492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.532 [2024-11-06 15:43:29.024904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.532 [2024-11-06 15:43:29.024928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.532 [2024-11-06 15:43:29.024940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.532 [2024-11-06 15:43:29.025170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.532 [2024-11-06 15:43:29.025408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.532 [2024-11-06 15:43:29.025421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.532 [2024-11-06 15:43:29.025431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.532 [2024-11-06 15:43:29.025441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.532 [2024-11-06 15:43:29.038511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.532 [2024-11-06 15:43:29.039011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.533 [2024-11-06 15:43:29.039034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.533 [2024-11-06 15:43:29.039045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.533 [2024-11-06 15:43:29.039299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.533 [2024-11-06 15:43:29.039536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.533 [2024-11-06 15:43:29.039550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.533 [2024-11-06 15:43:29.039560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.533 [2024-11-06 15:43:29.039570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.533 [2024-11-06 15:43:29.052516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.533 [2024-11-06 15:43:29.052936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.533 [2024-11-06 15:43:29.052960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.533 [2024-11-06 15:43:29.052970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.533 [2024-11-06 15:43:29.053214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.533 [2024-11-06 15:43:29.053451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.533 [2024-11-06 15:43:29.053465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.533 [2024-11-06 15:43:29.053475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.533 [2024-11-06 15:43:29.053485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.533 [2024-11-06 15:43:29.066613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.533 [2024-11-06 15:43:29.067118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.533 [2024-11-06 15:43:29.067141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.533 [2024-11-06 15:43:29.067152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.533 [2024-11-06 15:43:29.067409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.533 [2024-11-06 15:43:29.067648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.533 [2024-11-06 15:43:29.067661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.533 [2024-11-06 15:43:29.067670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.533 [2024-11-06 15:43:29.067680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.533 [2024-11-06 15:43:29.073349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:39:01.533 [2024-11-06 15:43:29.080636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:39:01.533 [2024-11-06 15:43:29.081133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:01.533 [2024-11-06 15:43:29.081157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420
00:39:01.533 [2024-11-06 15:43:29.081167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set
00:39:01.533 [2024-11-06 15:43:29.081405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor
00:39:01.533 [2024-11-06 15:43:29.081636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:39:01.533 [2024-11-06 15:43:29.081649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:39:01.533 [2024-11-06 15:43:29.081659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:39:01.533 [2024-11-06 15:43:29.081669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:39:01.533 [2024-11-06 15:43:29.094720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.533 [2024-11-06 15:43:29.095159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.533 [2024-11-06 15:43:29.095187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.533 [2024-11-06 15:43:29.095198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.533 [2024-11-06 15:43:29.095458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.533 [2024-11-06 15:43:29.095696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.533 [2024-11-06 15:43:29.095709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.533 [2024-11-06 15:43:29.095719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.533 [2024-11-06 15:43:29.095729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.533 [2024-11-06 15:43:29.108743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.533 [2024-11-06 15:43:29.109229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.533 [2024-11-06 15:43:29.109254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.533 [2024-11-06 15:43:29.109265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.533 [2024-11-06 15:43:29.109497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.533 [2024-11-06 15:43:29.109729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.533 [2024-11-06 15:43:29.109742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.533 [2024-11-06 15:43:29.109752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.533 [2024-11-06 15:43:29.109762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.533 [2024-11-06 15:43:29.122729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.533 [2024-11-06 15:43:29.123243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.533 [2024-11-06 15:43:29.123267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.533 [2024-11-06 15:43:29.123278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.533 [2024-11-06 15:43:29.123510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.533 [2024-11-06 15:43:29.123741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.533 [2024-11-06 15:43:29.123754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.533 [2024-11-06 15:43:29.123764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.533 [2024-11-06 15:43:29.123774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.533 [2024-11-06 15:43:29.136616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.533 [2024-11-06 15:43:29.137040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.533 [2024-11-06 15:43:29.137064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.533 [2024-11-06 15:43:29.137076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.533 [2024-11-06 15:43:29.137318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.533 [2024-11-06 15:43:29.137550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.533 [2024-11-06 15:43:29.137563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.533 [2024-11-06 15:43:29.137573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.533 [2024-11-06 15:43:29.137583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.533 [2024-11-06 15:43:29.150607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.533 [2024-11-06 15:43:29.151104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.533 [2024-11-06 15:43:29.151127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.534 [2024-11-06 15:43:29.151138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.534 [2024-11-06 15:43:29.151378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.534 [2024-11-06 15:43:29.151607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.534 [2024-11-06 15:43:29.151620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.534 [2024-11-06 15:43:29.151630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.534 [2024-11-06 15:43:29.151639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.534 [2024-11-06 15:43:29.164669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.534 [2024-11-06 15:43:29.165175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.534 [2024-11-06 15:43:29.165199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.534 [2024-11-06 15:43:29.165218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.534 [2024-11-06 15:43:29.165455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.534 [2024-11-06 15:43:29.165691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.534 [2024-11-06 15:43:29.165704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.534 [2024-11-06 15:43:29.165714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.534 [2024-11-06 15:43:29.165724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.794 [2024-11-06 15:43:29.178601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.794 [2024-11-06 15:43:29.179099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.794 [2024-11-06 15:43:29.179122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.794 [2024-11-06 15:43:29.179132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.794 [2024-11-06 15:43:29.179370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.794 [2024-11-06 15:43:29.179600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.794 [2024-11-06 15:43:29.179613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.794 [2024-11-06 15:43:29.179623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.794 [2024-11-06 15:43:29.179633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:01.794 [2024-11-06 15:43:29.184149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:01.794 [2024-11-06 15:43:29.184189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:01.794 [2024-11-06 15:43:29.184200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:01.794 [2024-11-06 15:43:29.184215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:39:01.794 [2024-11-06 15:43:29.184224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:01.794 [2024-11-06 15:43:29.186322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:01.794 [2024-11-06 15:43:29.186398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.794 [2024-11-06 15:43:29.186420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:01.794 [2024-11-06 15:43:29.192770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.794 [2024-11-06 15:43:29.193291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.794 [2024-11-06 15:43:29.193317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.794 [2024-11-06 15:43:29.193328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.794 [2024-11-06 15:43:29.193573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.794 [2024-11-06 15:43:29.193804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.794 [2024-11-06 15:43:29.193817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.794 [2024-11-06 15:43:29.193827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.794 [2024-11-06 15:43:29.193838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.794 [2024-11-06 15:43:29.206791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.794 [2024-11-06 15:43:29.207238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.794 [2024-11-06 15:43:29.207264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.795 [2024-11-06 15:43:29.207276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.795 [2024-11-06 15:43:29.207515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.795 [2024-11-06 15:43:29.207755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.795 [2024-11-06 15:43:29.207768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.795 [2024-11-06 15:43:29.207778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.795 [2024-11-06 15:43:29.207789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.795 [2024-11-06 15:43:29.220948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.795 [2024-11-06 15:43:29.221456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.795 [2024-11-06 15:43:29.221480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.795 [2024-11-06 15:43:29.221490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.795 [2024-11-06 15:43:29.221728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.795 [2024-11-06 15:43:29.221966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.795 [2024-11-06 15:43:29.221979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.795 [2024-11-06 15:43:29.221988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.795 [2024-11-06 15:43:29.221999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.795 [2024-11-06 15:43:29.234950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.795 [2024-11-06 15:43:29.235475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.795 [2024-11-06 15:43:29.235503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.795 [2024-11-06 15:43:29.235514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.795 [2024-11-06 15:43:29.235751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.795 [2024-11-06 15:43:29.235989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.795 [2024-11-06 15:43:29.236002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.795 [2024-11-06 15:43:29.236011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.795 [2024-11-06 15:43:29.236021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.795 [2024-11-06 15:43:29.249147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.795 [2024-11-06 15:43:29.249576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.795 [2024-11-06 15:43:29.249600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.795 [2024-11-06 15:43:29.249611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.795 [2024-11-06 15:43:29.249847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.795 [2024-11-06 15:43:29.250085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.795 [2024-11-06 15:43:29.250099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.795 [2024-11-06 15:43:29.250109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.795 [2024-11-06 15:43:29.250120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.795 [2024-11-06 15:43:29.263243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.795 [2024-11-06 15:43:29.263754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.795 [2024-11-06 15:43:29.263778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.795 [2024-11-06 15:43:29.263789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.795 [2024-11-06 15:43:29.264027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.795 [2024-11-06 15:43:29.264270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.795 [2024-11-06 15:43:29.264283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.795 [2024-11-06 15:43:29.264293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.795 [2024-11-06 15:43:29.264304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.795 [2024-11-06 15:43:29.277257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.795 [2024-11-06 15:43:29.277772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.795 [2024-11-06 15:43:29.277797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.795 [2024-11-06 15:43:29.277813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.795 [2024-11-06 15:43:29.278052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.795 [2024-11-06 15:43:29.278297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.795 [2024-11-06 15:43:29.278311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.795 [2024-11-06 15:43:29.278321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.795 [2024-11-06 15:43:29.278332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.795 [2024-11-06 15:43:29.291266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.795 [2024-11-06 15:43:29.291771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.795 [2024-11-06 15:43:29.291794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.795 [2024-11-06 15:43:29.291805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.795 [2024-11-06 15:43:29.292041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.795 [2024-11-06 15:43:29.292287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.795 [2024-11-06 15:43:29.292301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.795 [2024-11-06 15:43:29.292310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.795 [2024-11-06 15:43:29.292320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.795 [2024-11-06 15:43:29.305469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.795 [2024-11-06 15:43:29.305971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.795 [2024-11-06 15:43:29.305995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.795 [2024-11-06 15:43:29.306006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.795 [2024-11-06 15:43:29.306249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.795 [2024-11-06 15:43:29.306488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.795 [2024-11-06 15:43:29.306502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.795 [2024-11-06 15:43:29.306511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.795 [2024-11-06 15:43:29.306522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.795 [2024-11-06 15:43:29.319642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.795 [2024-11-06 15:43:29.320150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.795 [2024-11-06 15:43:29.320172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.795 [2024-11-06 15:43:29.320183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.796 [2024-11-06 15:43:29.320424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.796 [2024-11-06 15:43:29.320669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.796 [2024-11-06 15:43:29.320682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.796 [2024-11-06 15:43:29.320691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.796 [2024-11-06 15:43:29.320701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.796 [2024-11-06 15:43:29.333850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.796 [2024-11-06 15:43:29.334332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.796 [2024-11-06 15:43:29.334356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.796 [2024-11-06 15:43:29.334367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.796 [2024-11-06 15:43:29.334604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.796 [2024-11-06 15:43:29.334841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.796 [2024-11-06 15:43:29.334854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.796 [2024-11-06 15:43:29.334863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.796 [2024-11-06 15:43:29.334873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.796 [2024-11-06 15:43:29.347963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.796 [2024-11-06 15:43:29.348452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.796 [2024-11-06 15:43:29.348476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.796 [2024-11-06 15:43:29.348486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.796 [2024-11-06 15:43:29.348721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.796 [2024-11-06 15:43:29.348958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.796 [2024-11-06 15:43:29.348971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.796 [2024-11-06 15:43:29.348981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.796 [2024-11-06 15:43:29.348990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.796 [2024-11-06 15:43:29.362101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.796 [2024-11-06 15:43:29.362605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.796 [2024-11-06 15:43:29.362629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.796 [2024-11-06 15:43:29.362640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.796 [2024-11-06 15:43:29.362876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.796 [2024-11-06 15:43:29.363114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.796 [2024-11-06 15:43:29.363128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.796 [2024-11-06 15:43:29.363141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.796 [2024-11-06 15:43:29.363152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.796 [2024-11-06 15:43:29.376256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.796 [2024-11-06 15:43:29.376749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.796 [2024-11-06 15:43:29.376773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.796 [2024-11-06 15:43:29.376784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.796 [2024-11-06 15:43:29.377019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.796 [2024-11-06 15:43:29.377261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.796 [2024-11-06 15:43:29.377276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.796 [2024-11-06 15:43:29.377286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.796 [2024-11-06 15:43:29.377296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.796 [2024-11-06 15:43:29.390377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.796 [2024-11-06 15:43:29.390889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.796 [2024-11-06 15:43:29.390913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.796 [2024-11-06 15:43:29.390925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.796 [2024-11-06 15:43:29.391161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.796 [2024-11-06 15:43:29.391405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.796 [2024-11-06 15:43:29.391420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.796 [2024-11-06 15:43:29.391430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.796 [2024-11-06 15:43:29.391441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.796 [2024-11-06 15:43:29.404547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.796 [2024-11-06 15:43:29.405060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.796 [2024-11-06 15:43:29.405085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.796 [2024-11-06 15:43:29.405096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.796 [2024-11-06 15:43:29.405345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.796 [2024-11-06 15:43:29.405582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.796 [2024-11-06 15:43:29.405596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.796 [2024-11-06 15:43:29.405606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.796 [2024-11-06 15:43:29.405616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:01.796 [2024-11-06 15:43:29.418589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:01.796 [2024-11-06 15:43:29.419074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.796 [2024-11-06 15:43:29.419098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:01.796 [2024-11-06 15:43:29.419110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:01.796 [2024-11-06 15:43:29.419354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:01.796 [2024-11-06 15:43:29.419593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:01.796 [2024-11-06 15:43:29.419608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:01.796 [2024-11-06 15:43:29.419617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:01.796 [2024-11-06 15:43:29.419627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.057 [2024-11-06 15:43:29.432812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.057 [2024-11-06 15:43:29.433319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.057 [2024-11-06 15:43:29.433344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.057 [2024-11-06 15:43:29.433355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.057 [2024-11-06 15:43:29.433597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.057 [2024-11-06 15:43:29.433836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.057 [2024-11-06 15:43:29.433850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.057 [2024-11-06 15:43:29.433859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.057 [2024-11-06 15:43:29.433869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.057 [2024-11-06 15:43:29.446992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.057 [2024-11-06 15:43:29.447495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.057 [2024-11-06 15:43:29.447518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.057 [2024-11-06 15:43:29.447530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.057 [2024-11-06 15:43:29.447767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.057 [2024-11-06 15:43:29.448005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.057 [2024-11-06 15:43:29.448018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.057 [2024-11-06 15:43:29.448028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.057 [2024-11-06 15:43:29.448038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.057 [2024-11-06 15:43:29.461159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.057 [2024-11-06 15:43:29.461673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.057 [2024-11-06 15:43:29.461700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.057 [2024-11-06 15:43:29.461711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.057 [2024-11-06 15:43:29.461949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.057 [2024-11-06 15:43:29.462186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.057 [2024-11-06 15:43:29.462199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.057 [2024-11-06 15:43:29.462214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.057 [2024-11-06 15:43:29.462225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.057 [2024-11-06 15:43:29.475322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.057 [2024-11-06 15:43:29.475803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.057 [2024-11-06 15:43:29.475826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.057 [2024-11-06 15:43:29.475836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.057 [2024-11-06 15:43:29.476073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.057 [2024-11-06 15:43:29.476314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.057 [2024-11-06 15:43:29.476328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.057 [2024-11-06 15:43:29.476337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.057 [2024-11-06 15:43:29.476347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.057 [2024-11-06 15:43:29.489429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.057 [2024-11-06 15:43:29.489932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.057 [2024-11-06 15:43:29.489955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.057 [2024-11-06 15:43:29.489965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.057 [2024-11-06 15:43:29.490207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.057 [2024-11-06 15:43:29.490444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.057 [2024-11-06 15:43:29.490457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.057 [2024-11-06 15:43:29.490467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.057 [2024-11-06 15:43:29.490477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.057 [2024-11-06 15:43:29.503565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.057 [2024-11-06 15:43:29.504077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.057 [2024-11-06 15:43:29.504100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.057 [2024-11-06 15:43:29.504110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.057 [2024-11-06 15:43:29.504355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.058 [2024-11-06 15:43:29.504592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.058 [2024-11-06 15:43:29.504605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.058 [2024-11-06 15:43:29.504615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.058 [2024-11-06 15:43:29.504624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.058 [2024-11-06 15:43:29.517721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.058 [2024-11-06 15:43:29.518148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.058 [2024-11-06 15:43:29.518170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.058 [2024-11-06 15:43:29.518180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.058 [2024-11-06 15:43:29.518423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.058 [2024-11-06 15:43:29.518660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.058 [2024-11-06 15:43:29.518674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.058 [2024-11-06 15:43:29.518683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.058 [2024-11-06 15:43:29.518693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.058 [2024-11-06 15:43:29.531803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.058 [2024-11-06 15:43:29.532307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.058 [2024-11-06 15:43:29.532331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.058 [2024-11-06 15:43:29.532342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.058 [2024-11-06 15:43:29.532576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.058 [2024-11-06 15:43:29.532811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.058 [2024-11-06 15:43:29.532824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.058 [2024-11-06 15:43:29.532832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.058 [2024-11-06 15:43:29.532842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.058 [2024-11-06 15:43:29.545933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.058 [2024-11-06 15:43:29.546419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.058 [2024-11-06 15:43:29.546444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.058 [2024-11-06 15:43:29.546454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.058 [2024-11-06 15:43:29.546689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.058 [2024-11-06 15:43:29.546931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.058 [2024-11-06 15:43:29.546944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.058 [2024-11-06 15:43:29.546954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.058 [2024-11-06 15:43:29.546973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.058 [2024-11-06 15:43:29.560064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.058 [2024-11-06 15:43:29.560567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.058 [2024-11-06 15:43:29.560590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.058 [2024-11-06 15:43:29.560601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.058 [2024-11-06 15:43:29.560836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.058 [2024-11-06 15:43:29.561072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.058 [2024-11-06 15:43:29.561085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.058 [2024-11-06 15:43:29.561095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.058 [2024-11-06 15:43:29.561105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.058 [2024-11-06 15:43:29.574211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.058 [2024-11-06 15:43:29.574627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.058 [2024-11-06 15:43:29.574650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.058 [2024-11-06 15:43:29.574660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.058 [2024-11-06 15:43:29.574896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.058 [2024-11-06 15:43:29.575133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.058 [2024-11-06 15:43:29.575146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.058 [2024-11-06 15:43:29.575155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.058 [2024-11-06 15:43:29.575165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.058 [2024-11-06 15:43:29.588259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.058 [2024-11-06 15:43:29.588758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.058 [2024-11-06 15:43:29.588780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.058 [2024-11-06 15:43:29.588791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.058 [2024-11-06 15:43:29.589028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.058 [2024-11-06 15:43:29.589270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.058 [2024-11-06 15:43:29.589284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.058 [2024-11-06 15:43:29.589298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.058 [2024-11-06 15:43:29.589309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.058 [2024-11-06 15:43:29.602382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.058 [2024-11-06 15:43:29.602884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.058 [2024-11-06 15:43:29.602907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.058 [2024-11-06 15:43:29.602918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.058 [2024-11-06 15:43:29.603154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.058 [2024-11-06 15:43:29.603396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.058 [2024-11-06 15:43:29.603410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.058 [2024-11-06 15:43:29.603419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.058 [2024-11-06 15:43:29.603429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.058 [2024-11-06 15:43:29.616522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.058 [2024-11-06 15:43:29.616954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.058 [2024-11-06 15:43:29.616977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.058 [2024-11-06 15:43:29.616988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.058 [2024-11-06 15:43:29.617229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.058 [2024-11-06 15:43:29.617465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.059 [2024-11-06 15:43:29.617478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.059 [2024-11-06 15:43:29.617488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.059 [2024-11-06 15:43:29.617498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.059 [2024-11-06 15:43:29.630577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.059 [2024-11-06 15:43:29.631013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.059 [2024-11-06 15:43:29.631036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.059 [2024-11-06 15:43:29.631046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.059 [2024-11-06 15:43:29.631289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.059 [2024-11-06 15:43:29.631526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.059 [2024-11-06 15:43:29.631561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.059 [2024-11-06 15:43:29.631570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.059 [2024-11-06 15:43:29.631581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.059 [2024-11-06 15:43:29.644692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.059 [2024-11-06 15:43:29.645171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.059 [2024-11-06 15:43:29.645194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.059 [2024-11-06 15:43:29.645210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.059 [2024-11-06 15:43:29.645451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.059 [2024-11-06 15:43:29.645688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.059 [2024-11-06 15:43:29.645701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.059 [2024-11-06 15:43:29.645711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.059 [2024-11-06 15:43:29.645721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.059 3536.33 IOPS, 13.81 MiB/s [2024-11-06T14:43:29.697Z] [2024-11-06 15:43:29.658837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.059 [2024-11-06 15:43:29.659340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.059 [2024-11-06 15:43:29.659365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.059 [2024-11-06 15:43:29.659377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.059 [2024-11-06 15:43:29.659612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.059 [2024-11-06 15:43:29.659848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.059 [2024-11-06 15:43:29.659863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.059 [2024-11-06 15:43:29.659873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.059 [2024-11-06 15:43:29.659884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.059 [2024-11-06 15:43:29.672955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.059 [2024-11-06 15:43:29.673455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.059 [2024-11-06 15:43:29.673478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.059 [2024-11-06 15:43:29.673490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.059 [2024-11-06 15:43:29.673726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.059 [2024-11-06 15:43:29.673962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.059 [2024-11-06 15:43:29.673975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.059 [2024-11-06 15:43:29.673984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.059 [2024-11-06 15:43:29.673994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.059 [2024-11-06 15:43:29.687073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.059 [2024-11-06 15:43:29.687580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.059 [2024-11-06 15:43:29.687608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.059 [2024-11-06 15:43:29.687620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.059 [2024-11-06 15:43:29.687856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.059 [2024-11-06 15:43:29.688093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.059 [2024-11-06 15:43:29.688108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.059 [2024-11-06 15:43:29.688118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.059 [2024-11-06 15:43:29.688128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.319 [2024-11-06 15:43:29.701211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.319 [2024-11-06 15:43:29.701718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.319 [2024-11-06 15:43:29.701742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.319 [2024-11-06 15:43:29.701753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.319 [2024-11-06 15:43:29.701989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.319 [2024-11-06 15:43:29.702231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.319 [2024-11-06 15:43:29.702245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.319 [2024-11-06 15:43:29.702255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.319 [2024-11-06 15:43:29.702265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.319 [2024-11-06 15:43:29.715349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.319 [2024-11-06 15:43:29.715786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.319 [2024-11-06 15:43:29.715809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.319 [2024-11-06 15:43:29.715819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.319 [2024-11-06 15:43:29.716053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.319 [2024-11-06 15:43:29.716293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.319 [2024-11-06 15:43:29.716307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.319 [2024-11-06 15:43:29.716319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.319 [2024-11-06 15:43:29.716328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.319 [2024-11-06 15:43:29.729414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.319 [2024-11-06 15:43:29.729864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.319 [2024-11-06 15:43:29.729888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.319 [2024-11-06 15:43:29.729899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.319 [2024-11-06 15:43:29.730139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.319 [2024-11-06 15:43:29.730382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.319 [2024-11-06 15:43:29.730396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.319 [2024-11-06 15:43:29.730405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.319 [2024-11-06 15:43:29.730416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.319 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:02.319 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:39:02.319 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:02.319 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:02.319 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:02.319 [2024-11-06 15:43:29.743476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.319 [2024-11-06 15:43:29.743978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.319 [2024-11-06 15:43:29.744002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.319 [2024-11-06 15:43:29.744014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.319 [2024-11-06 15:43:29.744259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.319 [2024-11-06 15:43:29.744497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.319 [2024-11-06 15:43:29.744511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.319 [2024-11-06 15:43:29.744521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.319 [2024-11-06 15:43:29.744531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.319 [2024-11-06 15:43:29.757637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.319 [2024-11-06 15:43:29.758135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.319 [2024-11-06 15:43:29.758159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.319 [2024-11-06 15:43:29.758170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.319 [2024-11-06 15:43:29.758412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.320 [2024-11-06 15:43:29.758647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.320 [2024-11-06 15:43:29.758661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.320 [2024-11-06 15:43:29.758671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.320 [2024-11-06 15:43:29.758682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.320 [2024-11-06 15:43:29.771780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.320 [2024-11-06 15:43:29.772206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.320 [2024-11-06 15:43:29.772230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.320 [2024-11-06 15:43:29.772245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.320 [2024-11-06 15:43:29.772482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.320 [2024-11-06 15:43:29.772718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.320 [2024-11-06 15:43:29.772732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.320 [2024-11-06 15:43:29.772742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.320 [2024-11-06 15:43:29.772752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:02.320 [2024-11-06 15:43:29.785185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:02.320 [2024-11-06 15:43:29.785848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.320 [2024-11-06 15:43:29.786276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.320 [2024-11-06 15:43:29.786300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.320 [2024-11-06 15:43:29.786311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.320 [2024-11-06 15:43:29.786546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.320 [2024-11-06 15:43:29.786781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.320 [2024-11-06 15:43:29.786794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.320 [2024-11-06 15:43:29.786804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.320 [2024-11-06 15:43:29.786814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:02.320 [2024-11-06 15:43:29.799901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.320 [2024-11-06 15:43:29.800360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.320 [2024-11-06 15:43:29.800385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.320 [2024-11-06 15:43:29.800396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.320 [2024-11-06 15:43:29.800632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.320 [2024-11-06 15:43:29.800868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.320 [2024-11-06 15:43:29.800881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.320 [2024-11-06 15:43:29.800894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.320 [2024-11-06 15:43:29.800904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.320 [2024-11-06 15:43:29.813988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.320 [2024-11-06 15:43:29.814406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.320 [2024-11-06 15:43:29.814430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.320 [2024-11-06 15:43:29.814441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.320 [2024-11-06 15:43:29.814677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.320 [2024-11-06 15:43:29.814915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.320 [2024-11-06 15:43:29.814928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.320 [2024-11-06 15:43:29.814938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.320 [2024-11-06 15:43:29.814948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.320 [2024-11-06 15:43:29.828051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.320 [2024-11-06 15:43:29.828568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.320 [2024-11-06 15:43:29.828592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.320 [2024-11-06 15:43:29.828603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.320 [2024-11-06 15:43:29.828841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.320 [2024-11-06 15:43:29.829079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.320 [2024-11-06 15:43:29.829093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.320 [2024-11-06 15:43:29.829103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.320 [2024-11-06 15:43:29.829123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.320 [2024-11-06 15:43:29.842058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.320 [2024-11-06 15:43:29.842575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.320 [2024-11-06 15:43:29.842599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.320 [2024-11-06 15:43:29.842610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.320 [2024-11-06 15:43:29.842846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.320 [2024-11-06 15:43:29.843084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.320 [2024-11-06 15:43:29.843098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.320 [2024-11-06 15:43:29.843108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.320 [2024-11-06 15:43:29.843118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.320 [2024-11-06 15:43:29.856229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.320 [2024-11-06 15:43:29.856731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.320 [2024-11-06 15:43:29.856754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.320 [2024-11-06 15:43:29.856765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.320 [2024-11-06 15:43:29.857000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.320 [2024-11-06 15:43:29.857242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.320 [2024-11-06 15:43:29.857257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.320 [2024-11-06 15:43:29.857267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.320 [2024-11-06 15:43:29.857276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.320 [2024-11-06 15:43:29.870376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.320 [2024-11-06 15:43:29.870883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.320 [2024-11-06 15:43:29.870906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.320 [2024-11-06 15:43:29.870917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.320 [2024-11-06 15:43:29.871153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.320 [2024-11-06 15:43:29.871395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.320 [2024-11-06 15:43:29.871409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.320 [2024-11-06 15:43:29.871418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.320 [2024-11-06 15:43:29.871427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.320 [2024-11-06 15:43:29.884515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.320 [2024-11-06 15:43:29.885037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.320 [2024-11-06 15:43:29.885062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.320 [2024-11-06 15:43:29.885073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.320 [2024-11-06 15:43:29.885316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.320 [2024-11-06 15:43:29.885552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.320 [2024-11-06 15:43:29.885566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.320 [2024-11-06 15:43:29.885575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.320 [2024-11-06 15:43:29.885585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:39:02.320 Malloc0 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.320 [2024-11-06 15:43:29.898701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:02.320 [2024-11-06 15:43:29.899189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:02.320 [2024-11-06 15:43:29.899218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=10.0.0.2, port=4420 00:39:02.320 [2024-11-06 15:43:29.899229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:39:02.320 [2024-11-06 15:43:29.899464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:39:02.320 [2024-11-06 15:43:29.899700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:39:02.320 [2024-11-06 15:43:29.899714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:39:02.320 [2024-11-06 15:43:29.899723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:39:02.320 [2024-11-06 15:43:29.899733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:02.320 [2024-11-06 15:43:29.909558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:02.320 [2024-11-06 15:43:29.912848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.320 15:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4103190 00:39:02.320 [2024-11-06 15:43:29.937116] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:39:04.190 3986.14 IOPS, 15.57 MiB/s [2024-11-06T14:43:32.765Z] 4719.12 IOPS, 18.43 MiB/s [2024-11-06T14:43:33.700Z] 5278.22 IOPS, 20.62 MiB/s [2024-11-06T14:43:35.076Z] 5732.70 IOPS, 22.39 MiB/s [2024-11-06T14:43:36.013Z] 6092.09 IOPS, 23.80 MiB/s [2024-11-06T14:43:36.948Z] 6402.17 IOPS, 25.01 MiB/s [2024-11-06T14:43:37.884Z] 6673.00 IOPS, 26.07 MiB/s [2024-11-06T14:43:38.819Z] 6896.29 IOPS, 26.94 MiB/s 00:39:11.181 Latency(us) 00:39:11.181 [2024-11-06T14:43:38.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.181 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:11.181 Verification LBA range: start 0x0 length 0x4000 00:39:11.181 Nvme1n1 : 15.01 7081.99 27.66 9850.00 0.00 7534.37 546.13 43191.34 00:39:11.181 [2024-11-06T14:43:38.819Z] =================================================================================================================== 00:39:11.181 [2024-11-06T14:43:38.819Z] Total : 7081.99 27.66 9850.00 0.00 7534.37 546.13 43191.34 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:39:12.117 15:43:39 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:12.117 rmmod nvme_tcp 00:39:12.117 rmmod nvme_fabrics 00:39:12.117 rmmod nvme_keyring 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 4104204 ']' 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 4104204 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 4104204 ']' 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 4104204 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4104204 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4104204' 00:39:12.117 killing process with pid 4104204 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@971 -- # kill 4104204 00:39:12.117 15:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 4104204 00:39:13.494 15:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:13.494 15:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:13.494 15:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:13.495 15:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:39:13.495 15:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:39:13.495 15:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:13.495 15:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:39:13.495 15:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:13.495 15:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:13.495 15:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:13.495 15:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:13.495 15:43:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.028 15:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:16.028 00:39:16.028 real 0m30.029s 00:39:16.028 user 1m13.591s 00:39:16.028 sys 0m7.121s 00:39:16.028 15:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:16.028 15:43:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:16.029 ************************************ 00:39:16.029 END TEST nvmf_bdevperf 00:39:16.029 ************************************ 00:39:16.029 15:43:43 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.029 ************************************ 00:39:16.029 START TEST nvmf_target_disconnect 00:39:16.029 ************************************ 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:16.029 * Looking for test storage... 00:39:16.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:39:16.029 15:43:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:16.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.029 --rc genhtml_branch_coverage=1 00:39:16.029 --rc genhtml_function_coverage=1 00:39:16.029 --rc genhtml_legend=1 00:39:16.029 --rc geninfo_all_blocks=1 00:39:16.029 --rc geninfo_unexecuted_blocks=1 
00:39:16.029 00:39:16.029 ' 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:16.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.029 --rc genhtml_branch_coverage=1 00:39:16.029 --rc genhtml_function_coverage=1 00:39:16.029 --rc genhtml_legend=1 00:39:16.029 --rc geninfo_all_blocks=1 00:39:16.029 --rc geninfo_unexecuted_blocks=1 00:39:16.029 00:39:16.029 ' 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:16.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.029 --rc genhtml_branch_coverage=1 00:39:16.029 --rc genhtml_function_coverage=1 00:39:16.029 --rc genhtml_legend=1 00:39:16.029 --rc geninfo_all_blocks=1 00:39:16.029 --rc geninfo_unexecuted_blocks=1 00:39:16.029 00:39:16.029 ' 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:16.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.029 --rc genhtml_branch_coverage=1 00:39:16.029 --rc genhtml_function_coverage=1 00:39:16.029 --rc genhtml_legend=1 00:39:16.029 --rc geninfo_all_blocks=1 00:39:16.029 --rc geninfo_unexecuted_blocks=1 00:39:16.029 00:39:16.029 ' 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:16.029 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:16.030 15:43:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:16.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:39:16.030 15:43:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:39:21.364 
15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:21.364 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:21.365 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:21.365 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:21.365 Found net devices under 0000:86:00.0: cvl_0_0 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:21.365 Found net devices under 0000:86:00.1: cvl_0_1 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:21.365 15:43:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:21.625 15:43:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:21.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:21.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:39:21.625 00:39:21.625 --- 10.0.0.2 ping statistics --- 00:39:21.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:21.625 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:21.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:21.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:39:21.625 00:39:21.625 --- 10.0.0.1 ping statistics --- 00:39:21.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:21.625 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:21.625 15:43:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:21.625 ************************************ 00:39:21.625 START TEST nvmf_target_disconnect_tc1 00:39:21.625 ************************************ 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:39:21.625 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:21.884 [2024-11-06 15:43:49.426522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:21.884 [2024-11-06 15:43:49.426590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032da00 
with addr=10.0.0.2, port=4420 00:39:21.884 [2024-11-06 15:43:49.426651] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:21.884 [2024-11-06 15:43:49.426667] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:21.884 [2024-11-06 15:43:49.426679] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:39:21.884 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:39:21.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:39:21.884 Initializing NVMe Controllers 00:39:21.884 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:39:21.884 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:21.884 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:21.884 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:21.884 00:39:21.884 real 0m0.207s 00:39:21.884 user 0m0.087s 00:39:21.884 sys 0m0.119s 00:39:21.884 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:21.884 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:21.884 ************************************ 00:39:21.884 END TEST nvmf_target_disconnect_tc1 00:39:21.884 ************************************ 00:39:21.884 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:39:21.884 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:21.884 15:43:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:21.884 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:22.145 ************************************ 00:39:22.146 START TEST nvmf_target_disconnect_tc2 00:39:22.146 ************************************ 00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4109638 00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4109638 00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 4109638 ']' 00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:22.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:39:22.146 15:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:22.146 [2024-11-06 15:43:49.615951] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:39:22.147 [2024-11-06 15:43:49.616041] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:39:22.147 [2024-11-06 15:43:49.747148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:39:22.408 [2024-11-06 15:43:49.857264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:39:22.408 [2024-11-06 15:43:49.857311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:39:22.408 [2024-11-06 15:43:49.857321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:39:22.408 [2024-11-06 15:43:49.857331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:39:22.408 [2024-11-06 15:43:49.857339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:39:22.408 [2024-11-06 15:43:49.859756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:39:22.408 [2024-11-06 15:43:49.859835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:39:22.408 [2024-11-06 15:43:49.859902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:39:22.408 [2024-11-06 15:43:49.859924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:22.974 Malloc0
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:22.974 [2024-11-06 15:43:50.539756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:22.974 [2024-11-06 15:43:50.568057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4109724
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:39:22.974 15:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:39:25.533 15:43:52
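The xtrace above walks the target through its standard NVMe-oF setup: a 64 MiB malloc bdev with 512 B blocks, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with the bdev attached as a namespace, and data plus discovery listeners on 10.0.0.2:4420. A minimal sketch of the same RPC sequence, assuming SPDK's scripts/rpc.py is on PATH; the `echo` prefix keeps this a dry run that only prints the commands (drop it to drive a live target on /var/tmp/spdk.sock):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-side setup traced in the log above.
# RPC="echo rpc.py" makes each line print instead of execute; this is an
# illustration of the verbs seen in the trace, not the test's own wrapper.
RPC="echo rpc.py"

$RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB ramdisk, 512 B blocks
$RPC nvmf_create_transport -t tcp -o                       # TCP transport, default options
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

In the test itself, `rpc_cmd` is autotest's wrapper that forwards these same verbs to rpc.py against the /var/tmp/spdk.sock socket the target announced earlier.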
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4109638
00:39:25.533 15:43:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:39:25.533 Read completed with error (sct=0, sc=8)
00:39:25.533 starting I/O failed
00:39:25.533 Write completed with error (sct=0, sc=8)
00:39:25.533 starting I/O failed
00:39:25.533 [2024-11-06 15:43:52.606468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:39:25.533 [2024-11-06 15:43:52.606827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:39:25.533 [2024-11-06 15:43:52.607190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:39:25.534 [2024-11-06 15:43:52.607566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:25.534 [2024-11-06 15:43:52.607868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.534 [2024-11-06 15:43:52.607895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:25.534 qpair failed and we were unable to recover it.
00:39:25.534 [2024-11-06 15:43:52.608124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.534 [2024-11-06 15:43:52.608149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.534 qpair failed and we were unable to recover it.
00:39:25.534 [2024-11-06 15:43:52.608356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.534 [2024-11-06 15:43:52.608373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.534 qpair failed and we were unable to recover it.
00:39:25.534 [2024-11-06 15:43:52.608457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.534 [2024-11-06 15:43:52.608472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.534 qpair failed and we were unable to recover it.
00:39:25.534 [2024-11-06 15:43:52.608670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.534 [2024-11-06 15:43:52.608685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.534 qpair failed and we were unable to recover it.
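Every aborted completion above reports the pair (sct=0, sc=8). Read against the NVMe status-code tables, SCT 0 is the Generic Command Status type and SC 0x08 in that set is "Command Aborted due to SQ Deletion", consistent with the host tearing down its submission queues once the target process is killed. A small decoder sketch covering only the codes that appear in this log (the rest of the spec's mapping is deliberately omitted):

```shell
#!/usr/bin/env bash
# Decode the (sct, sc) fields printed in the failed completions above.
# Partial mapping: only the Generic Command Status codes seen in this log.
decode_nvme_status() {
  local sct=$1 sc=$2
  case "$sct" in
    0) case "$sc" in
         0) echo "Generic / Successful Completion" ;;
         8) echo "Generic / Command Aborted due to SQ Deletion" ;;
         *) echo "Generic / SC $sc (not mapped here)" ;;
       esac ;;
    *) echo "SCT $sct / SC $sc (not mapped here)" ;;
  esac
}

decode_nvme_status 0 8   # the (sct=0, sc=8) pair from the log
```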
00:39:25.534 [2024-11-06 15:43:52.608761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.608776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.608998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.609014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.609222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.609239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.609398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.609413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.609597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.609614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 
00:39:25.534 [2024-11-06 15:43:52.609869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.609913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.610128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.610174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.610356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.610401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.610675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.610721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.610996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.611049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 
00:39:25.534 [2024-11-06 15:43:52.611337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.611380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.611545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.611583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.611771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.611808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.612013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.612059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.612351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.612391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 
00:39:25.534 [2024-11-06 15:43:52.612608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.612646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.612789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.612827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.613042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.613080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.613230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.613268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.613401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.613438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 
00:39:25.534 [2024-11-06 15:43:52.613657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.613695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.613905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.613942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.614090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.614129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.614365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.614404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.534 qpair failed and we were unable to recover it. 00:39:25.534 [2024-11-06 15:43:52.614680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.534 [2024-11-06 15:43:52.614724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 
00:39:25.535 [2024-11-06 15:43:52.615041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.615098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.615362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.615401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.615536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.615573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.615725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.615770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.616062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.616116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 
00:39:25.535 [2024-11-06 15:43:52.616313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.616354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.616482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.616520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.616670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.616713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.616935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.616979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.617221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.617267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 
00:39:25.535 [2024-11-06 15:43:52.617532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.617577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.617746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.617792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.618014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.618058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.618286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.618339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.618538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.618576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 
00:39:25.535 [2024-11-06 15:43:52.618779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.618816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.619077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.619119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.619377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.619418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.619607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.619642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 00:39:25.535 [2024-11-06 15:43:52.619939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.535 [2024-11-06 15:43:52.619977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.535 qpair failed and we were unable to recover it. 
00:39:25.535 [2024-11-06 15:43:52.620234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.620272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.620410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.620447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.620718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.620754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.620957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.620996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.621223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.621269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.621486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.621527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.621722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.621761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.622036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.622075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.622284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.622325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.622538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.622578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.622891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.622930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.623183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.623233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.623394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.623433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.623665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.623705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.624000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.624052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.624329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.624370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.624563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.624602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.535 qpair failed and we were unable to recover it.
00:39:25.535 [2024-11-06 15:43:52.624876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.535 [2024-11-06 15:43:52.624917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.625213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.625254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.625466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.625505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.625785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.625825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.626137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.626180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.626438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.626482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.626698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.626741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.627049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.627104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.627399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.627440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.627645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.627685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.627842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.627881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.628129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.628167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.628392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.628434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.628584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.628622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.628897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.628941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.629138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.629182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.629496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.629537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.629694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.629733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.629882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.629922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.630200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.630259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.630505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.630549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.630828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.630872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.631095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.631137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.631368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.631413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.631642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.631686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.631922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.631964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.632090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.632132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.632363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.632415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.632665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.632709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.632951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.632994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.633190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.633246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.633528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.633570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.633842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.633886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.634182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.634236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.634429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.634472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.634678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.634720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.635008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.635053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.635347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.635392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.536 qpair failed and we were unable to recover it.
00:39:25.536 [2024-11-06 15:43:52.635604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.536 [2024-11-06 15:43:52.635647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.635983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.636029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.636302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.636347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.636550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.636594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.636810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.636854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.637136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.637178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.637507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.637552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.637843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.637885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.638179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.638232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.638445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.638487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.638787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.638838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.639054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.639097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.639331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.639376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.639651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.639694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.639955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.639999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.640278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.640322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.640600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.640649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.640981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.641025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.641320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.641365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.641660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.641704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.641919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.641962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.642109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.642152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.642445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.642491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.642767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.642829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.643110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.643153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.643359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.643407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.643626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.643669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.643974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.644017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.644332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.644378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.644659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.644701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.644983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.645027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.645341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.645387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.537 qpair failed and we were unable to recover it.
00:39:25.537 [2024-11-06 15:43:52.645587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.537 [2024-11-06 15:43:52.645629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.538 qpair failed and we were unable to recover it.
00:39:25.538 [2024-11-06 15:43:52.645839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.538 [2024-11-06 15:43:52.645883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.538 qpair failed and we were unable to recover it.
00:39:25.538 [2024-11-06 15:43:52.646043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.538 [2024-11-06 15:43:52.646088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.538 qpair failed and we were unable to recover it.
00:39:25.538 [2024-11-06 15:43:52.646326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.538 [2024-11-06 15:43:52.646370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.538 qpair failed and we were unable to recover it.
00:39:25.538 [2024-11-06 15:43:52.646676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.538 [2024-11-06 15:43:52.646719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.538 qpair failed and we were unable to recover it.
00:39:25.538 [2024-11-06 15:43:52.646943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.538 [2024-11-06 15:43:52.646987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.538 qpair failed and we were unable to recover it.
00:39:25.538 [2024-11-06 15:43:52.647215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.538 [2024-11-06 15:43:52.647259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.538 qpair failed and we were unable to recover it.
00:39:25.538 [2024-11-06 15:43:52.647476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.538 [2024-11-06 15:43:52.647519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.538 qpair failed and we were unable to recover it.
00:39:25.538 [2024-11-06 15:43:52.647656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.647698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.647958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.648001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.648156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.648199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.648384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.648429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.648710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.648753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 
00:39:25.538 [2024-11-06 15:43:52.648954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.648997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.649283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.649330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.649492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.649534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.649806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.649849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.650159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.650211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 
00:39:25.538 [2024-11-06 15:43:52.650457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.650500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.650710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.650753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.650984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.651027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.651329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.651373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.651584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.651628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 
00:39:25.538 [2024-11-06 15:43:52.651935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.651979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.652245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.652295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.652507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.652550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.652782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.652828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.653129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.653171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 
00:39:25.538 [2024-11-06 15:43:52.653479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.653524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.653808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.653852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.654130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.654173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.654469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.654513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.654735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.654779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 
00:39:25.538 [2024-11-06 15:43:52.655098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.655140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.655445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.655490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.655780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.655824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.656105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.656147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 00:39:25.538 [2024-11-06 15:43:52.656406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.656451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.538 qpair failed and we were unable to recover it. 
00:39:25.538 [2024-11-06 15:43:52.656754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.538 [2024-11-06 15:43:52.656799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.657077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.657119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.657380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.657425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.657703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.657747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.658038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.658081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 
00:39:25.539 [2024-11-06 15:43:52.658322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.658366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.658664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.658708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.658966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.659008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.659293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.659338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.659629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.659673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 
00:39:25.539 [2024-11-06 15:43:52.659955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.659998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.660278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.660323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.660639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.660684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.660972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.661015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.661242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.661287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 
00:39:25.539 [2024-11-06 15:43:52.661596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.661640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.661870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.661913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.662141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.662253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.662547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.662591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.662877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.662919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 
00:39:25.539 [2024-11-06 15:43:52.663157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.663209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.663409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.663452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.663760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.663803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.664070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.664112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.664335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.664382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 
00:39:25.539 [2024-11-06 15:43:52.664581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.664623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.664763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.664813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.665083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.665128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.665427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.665472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.665701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.665744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 
00:39:25.539 [2024-11-06 15:43:52.666036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.666080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.666365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.666410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.666706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.666749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.667037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.667082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.667310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.667354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 
00:39:25.539 [2024-11-06 15:43:52.667647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.667690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.667991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.668036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.668329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.539 [2024-11-06 15:43:52.668375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.539 qpair failed and we were unable to recover it. 00:39:25.539 [2024-11-06 15:43:52.668583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.668626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.668938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.668981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 
00:39:25.540 [2024-11-06 15:43:52.669238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.669284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.669563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.669606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.669892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.669936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.670227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.670271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.670501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.670545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 
00:39:25.540 [2024-11-06 15:43:52.670785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.670827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.671121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.671163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.671375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.671420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.671631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.671675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.671985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.672028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 
00:39:25.540 [2024-11-06 15:43:52.672337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.672383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.672670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.672714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.672963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.673007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.673298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.673343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.673588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.673631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 
00:39:25.540 [2024-11-06 15:43:52.673916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.673959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.674249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.674295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.674577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.674621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.674903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.674945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 00:39:25.540 [2024-11-06 15:43:52.675249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.540 [2024-11-06 15:43:52.675295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.540 qpair failed and we were unable to recover it. 
00:39:25.543 [2024-11-06 15:43:52.710282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.710327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.543 [2024-11-06 15:43:52.710535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.710579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.543 [2024-11-06 15:43:52.710875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.710919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.543 [2024-11-06 15:43:52.711211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.711257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.543 [2024-11-06 15:43:52.711551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.711594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 
00:39:25.543 [2024-11-06 15:43:52.711884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.711927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.543 [2024-11-06 15:43:52.712142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.712186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.543 [2024-11-06 15:43:52.712439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.712483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.543 [2024-11-06 15:43:52.712706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.712749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.543 [2024-11-06 15:43:52.713023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.713067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 
00:39:25.543 [2024-11-06 15:43:52.713390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.713438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.543 [2024-11-06 15:43:52.713703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.713746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.543 [2024-11-06 15:43:52.714018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.714062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.543 [2024-11-06 15:43:52.714354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.714401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.543 [2024-11-06 15:43:52.714697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.714741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 
00:39:25.543 [2024-11-06 15:43:52.715044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.543 [2024-11-06 15:43:52.715087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.543 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.715393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.715447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.715715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.715758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.715980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.716024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.716342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.716388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 
00:39:25.544 [2024-11-06 15:43:52.716691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.716733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.717051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.717095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.717398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.717445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.717665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.717708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.717977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.718020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 
00:39:25.544 [2024-11-06 15:43:52.718315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.718362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.718691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.718734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.719021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.719065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.719359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.719405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.719602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.719645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 
00:39:25.544 [2024-11-06 15:43:52.719973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.720018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.720332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.720379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.720675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.720718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.721004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.721047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.721364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.721411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 
00:39:25.544 [2024-11-06 15:43:52.721637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.721680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.721991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.722034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.722330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.722375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.722684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.722730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.723020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.723075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 
00:39:25.544 [2024-11-06 15:43:52.723374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.723439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.723735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.723778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.724079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.724122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.724441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.724487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.724785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.724828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 
00:39:25.544 [2024-11-06 15:43:52.725111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.725154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.725476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.725524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.725776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.725819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.726036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.726078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.726247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.726293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 
00:39:25.544 [2024-11-06 15:43:52.726502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.726547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.726818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.726860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.727170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.727223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.727560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.544 [2024-11-06 15:43:52.727603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.544 qpair failed and we were unable to recover it. 00:39:25.544 [2024-11-06 15:43:52.727844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.727887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 
00:39:25.545 [2024-11-06 15:43:52.728172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.728226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.728396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.728446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.728714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.728757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.729063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.729108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.729408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.729455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 
00:39:25.545 [2024-11-06 15:43:52.729752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.729796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.730098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.730142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.730456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.730502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.730744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.730787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.731088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.731132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 
00:39:25.545 [2024-11-06 15:43:52.731426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.731471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.731779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.731824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.732117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.732160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.732471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.732517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.732816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.732859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 
00:39:25.545 [2024-11-06 15:43:52.733178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.733247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.733572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.733615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.733847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.733891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.734215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.734260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.734562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.734606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 
00:39:25.545 [2024-11-06 15:43:52.734902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.734945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.735256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.735302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.735583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.735626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.735889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.735934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 00:39:25.545 [2024-11-06 15:43:52.736226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.545 [2024-11-06 15:43:52.736271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.545 qpair failed and we were unable to recover it. 
00:39:25.548 [2024-11-06 15:43:52.772181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.772247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 00:39:25.548 [2024-11-06 15:43:52.772516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.772560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 00:39:25.548 [2024-11-06 15:43:52.772862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.772907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 00:39:25.548 [2024-11-06 15:43:52.773221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.773267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 00:39:25.548 [2024-11-06 15:43:52.773573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.773616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 
00:39:25.548 [2024-11-06 15:43:52.773840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.773884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 00:39:25.548 [2024-11-06 15:43:52.774219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.774262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 00:39:25.548 [2024-11-06 15:43:52.774489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.774534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 00:39:25.548 [2024-11-06 15:43:52.774831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.774875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 00:39:25.548 [2024-11-06 15:43:52.775163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.775221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 
00:39:25.548 [2024-11-06 15:43:52.775453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.775497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 00:39:25.548 [2024-11-06 15:43:52.775715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.775758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 00:39:25.548 [2024-11-06 15:43:52.775928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.548 [2024-11-06 15:43:52.775973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.548 qpair failed and we were unable to recover it. 00:39:25.548 [2024-11-06 15:43:52.776264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.776312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.776607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.776651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 
00:39:25.549 [2024-11-06 15:43:52.776869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.776912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.777227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.777274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.777503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.777547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.777849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.777894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.778109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.778151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 
00:39:25.549 [2024-11-06 15:43:52.778485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.778531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.778758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.778803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.779116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.779160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.779474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.779519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.779755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.779801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 
00:39:25.549 [2024-11-06 15:43:52.780110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.780154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.780469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.780520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.780852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.780898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.781181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.781239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.781465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.781509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 
00:39:25.549 [2024-11-06 15:43:52.781813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.781858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.782172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.782227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.782533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.782578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.782893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.782939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.783232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.783277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 
00:39:25.549 [2024-11-06 15:43:52.783440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.783485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.783802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.783847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.784174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.784235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.784523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.784566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.784794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.784838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 
00:39:25.549 [2024-11-06 15:43:52.785103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.785148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.785409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.785467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.785798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.785843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.786053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.786096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.786402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.786448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 
00:39:25.549 [2024-11-06 15:43:52.786684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.786728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.787034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.787077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.787351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.787397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.787721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.787767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 00:39:25.549 [2024-11-06 15:43:52.788092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.788135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.549 qpair failed and we were unable to recover it. 
00:39:25.549 [2024-11-06 15:43:52.788357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.549 [2024-11-06 15:43:52.788401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.788644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.788690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.788987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.789031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.789303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.789373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.789648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.789693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 
00:39:25.550 [2024-11-06 15:43:52.789952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.789995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.790288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.790335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.790634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.790678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.790970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.791014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.791349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.791395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 
00:39:25.550 [2024-11-06 15:43:52.791710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.791755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.791906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.791951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.792246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.792291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.792521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.792566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.792795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.792840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 
00:39:25.550 [2024-11-06 15:43:52.793110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.793153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.793438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.793490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.793789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.793834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.794150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.794193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.794505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.794550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 
00:39:25.550 [2024-11-06 15:43:52.794781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.794824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.795137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.795180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.795424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.795469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.795739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.795784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.796039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.796082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 
00:39:25.550 [2024-11-06 15:43:52.796309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.796356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.796646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.796691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.796990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.797033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.797369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.797415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.797711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.797753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 
00:39:25.550 [2024-11-06 15:43:52.798059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.798104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.798382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.798427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.798745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.798788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.799079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.799122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 00:39:25.550 [2024-11-06 15:43:52.799387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.550 [2024-11-06 15:43:52.799434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.550 qpair failed and we were unable to recover it. 
00:39:25.550 [2024-11-06 15:43:52.799619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.550 [2024-11-06 15:43:52.799661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.550 qpair failed and we were unable to recover it.
00:39:25.550 [2024-11-06 15:43:52.799859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.550 [2024-11-06 15:43:52.799901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.550 qpair failed and we were unable to recover it.
00:39:25.550 [2024-11-06 15:43:52.800192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.550 [2024-11-06 15:43:52.800261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.550 qpair failed and we were unable to recover it.
00:39:25.550 [2024-11-06 15:43:52.800587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.550 [2024-11-06 15:43:52.800631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.550 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.800794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.800837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.801132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.801177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.801527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.801572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.801857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.801901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.802231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.802278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.802426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.802470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.802790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.802833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.803138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.803184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.803422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.803466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.803721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.803765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.804038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.804083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.804351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.804398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.804667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.804709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.804951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.804995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.805293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.805338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.805649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.805694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.805923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.805981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.806280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.806325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.806650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.806695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.806998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.807044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.807214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.807259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.807542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.807585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.807907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.807953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.808284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.808330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.808626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.808670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.808883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.808928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.809221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.809265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.809561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.809606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.809944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.809988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.810260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.810305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.810599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.810643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.810955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.811001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.811218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.811263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.811545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.551 [2024-11-06 15:43:52.811589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.551 qpair failed and we were unable to recover it.
00:39:25.551 [2024-11-06 15:43:52.811789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.811832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.812048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.812093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.812397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.812442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.812601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.812645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.812846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.812889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.813177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.813247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.813570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.813615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.813927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.813970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.814268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.814314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.814537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.814582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.814804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.814853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.815142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.815186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.815423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.815468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.815749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.815793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.816008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.816052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.816359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.816406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.816683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.816726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.816952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.816996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.817223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.817269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.817471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.817514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.817811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.817854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.818086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.818131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.818460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.818505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.818798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.818841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.819055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.819101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.819370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.819418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.819642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.819686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.819917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.819963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.820272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.820320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.820571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.820616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.820924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.820968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.821261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.821306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.821576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.821619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.821955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.822000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.822270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.822316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.822542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.822585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.822814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.822859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.823164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.823232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.552 qpair failed and we were unable to recover it.
00:39:25.552 [2024-11-06 15:43:52.823528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.552 [2024-11-06 15:43:52.823572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.823850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.823895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.824145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.824188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.824436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.824481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.824778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.824824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.825120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.825163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.825471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.825517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.825858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.825908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.826144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.826200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.826497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.826542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.826827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.826872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.827155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.827199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.827483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.827534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.827833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.827877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.828089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.828133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.828297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.828342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.828618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.828663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.828880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.828924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.829238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.829283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.829592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.829637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.829930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.829974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.830270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.830316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.830499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.830544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.830885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.830930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.831166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.831234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.831461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.831505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.831736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.831781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.832057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.832102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.832403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.832449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.832737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.832780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.832993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.833037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.833230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.833277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.833507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.833550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.833707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.833753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.833969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.834014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.834304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.834350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.834659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.834703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.834971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.553 [2024-11-06 15:43:52.835015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.553 qpair failed and we were unable to recover it.
00:39:25.553 [2024-11-06 15:43:52.835259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.835305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.835605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.835649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.835927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.835971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.836102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.836144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.836434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.836478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.836712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.836758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.837059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.837107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.837331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.837377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.837694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.837738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.838018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.838064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.838352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.838397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.838569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.838615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.838841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.838885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.839158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.839220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.839382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.839433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.839669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.839713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.840022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.840067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.840304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.840352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.840570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.840614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.840849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.840892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.841194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.841254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.841558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.841604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.841865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.841910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.842065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.842109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.842331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.842377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.842610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.842654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.842814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.842857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.843057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.843100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.843398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.843443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.843735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.843780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.844055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.844099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.844383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.844428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.844727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.844771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.845096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.845144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.845461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.845519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.845843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.845887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.846051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.846095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.846319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.846365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.554 qpair failed and we were unable to recover it.
00:39:25.554 [2024-11-06 15:43:52.846603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.554 [2024-11-06 15:43:52.846649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.846796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.846840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.846981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.847024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.847265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.847314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.847584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.847627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.847918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.847962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.848237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.848283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.848587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.848631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.848887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.848930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.849228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.849275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.849517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.849560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.849857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.849901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.850234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.850280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.850444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.850488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.850649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.850692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.851021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.851066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.851347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.851400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.851651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.851694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.852024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.852069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.852347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.852394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.852609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.852652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.852928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.852972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.853172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.853226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.853467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.853511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.853677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.853722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.854019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.854062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.854333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.854379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.854688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.854734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.855068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.855112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.855341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.855386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.855702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.855748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.856069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.856113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.856383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.856428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.856719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.856764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.857049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.555 [2024-11-06 15:43:52.857094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.555 qpair failed and we were unable to recover it.
00:39:25.555 [2024-11-06 15:43:52.857372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.556 [2024-11-06 15:43:52.857418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.556 qpair failed and we were unable to recover it.
00:39:25.556 [2024-11-06 15:43:52.857701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.556 [2024-11-06 15:43:52.857745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.556 qpair failed and we were unable to recover it.
00:39:25.556 [2024-11-06 15:43:52.858077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.556 [2024-11-06 15:43:52.858123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.556 qpair failed and we were unable to recover it.
00:39:25.556 [2024-11-06 15:43:52.858435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.858481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.858718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.858766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.859068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.859111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.859451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.859503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.859716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.859763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 
00:39:25.556 [2024-11-06 15:43:52.860074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.860125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.860373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.860419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.860702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.860747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.861048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.861092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.861336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.861382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 
00:39:25.556 [2024-11-06 15:43:52.861704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.861746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.862014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.862058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.862280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.862325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.862620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.862667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.862865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.862909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 
00:39:25.556 [2024-11-06 15:43:52.863108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.863152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.863389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.863435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.863742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.863786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.864001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.864052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.864372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.864420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 
00:39:25.556 [2024-11-06 15:43:52.864573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.864617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.864789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.864835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.865033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.865091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.865315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.865361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.865582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.865625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 
00:39:25.556 [2024-11-06 15:43:52.865909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.865953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.866228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.866275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.866495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.866539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.867697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.867765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.868038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.868088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 
00:39:25.556 [2024-11-06 15:43:52.868396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.868443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.868721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.868767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.868989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.869034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.869316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.869363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 00:39:25.556 [2024-11-06 15:43:52.869591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.556 [2024-11-06 15:43:52.869635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.556 qpair failed and we were unable to recover it. 
00:39:25.556 [2024-11-06 15:43:52.869783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.869826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.870095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.870138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.870376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.870423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.870572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.870616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.870765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.870809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 
00:39:25.557 [2024-11-06 15:43:52.871101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.871147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.871427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.871474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.871756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.871801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.872045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.872087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.872292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.872339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 
00:39:25.557 [2024-11-06 15:43:52.872523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.872570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.872809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.872852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.873145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.873188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.873503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.873548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.873885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.873929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 
00:39:25.557 [2024-11-06 15:43:52.874199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.874255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.874554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.874598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.874820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.874863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.875085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.875129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.875414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.875459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 
00:39:25.557 [2024-11-06 15:43:52.875618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.875661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.875877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.875919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.876160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.876218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.876517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.876567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.876783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.876827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 
00:39:25.557 [2024-11-06 15:43:52.877145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.877187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.877443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.877488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.877707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.877749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.877927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.877972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.878245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.878291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 
00:39:25.557 [2024-11-06 15:43:52.878444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.878488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.878651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.878695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.878905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.878948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.879229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.879275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.879488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.879534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 
00:39:25.557 [2024-11-06 15:43:52.879666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.879709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.879863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.879914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.880152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.557 [2024-11-06 15:43:52.880195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.557 qpair failed and we were unable to recover it. 00:39:25.557 [2024-11-06 15:43:52.880416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.880462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.880661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.880705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 
00:39:25.558 [2024-11-06 15:43:52.880931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.880977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.881212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.881259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.881503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.881548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.881702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.881745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.881970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.882014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 
00:39:25.558 [2024-11-06 15:43:52.882251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.882296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.882626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.882673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.882893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.882938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.883153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.883198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.883447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.883492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 
00:39:25.558 [2024-11-06 15:43:52.883820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.883870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.884094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.884151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.884514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.884561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.884777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.884824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 00:39:25.558 [2024-11-06 15:43:52.885111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.558 [2024-11-06 15:43:52.885156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.558 qpair failed and we were unable to recover it. 
00:39:25.559 [2024-11-06 15:43:52.892481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.559 [2024-11-06 15:43:52.892575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:25.559 qpair failed and we were unable to recover it.
00:39:25.559 [2024-11-06 15:43:52.892785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.559 [2024-11-06 15:43:52.892837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:25.559 qpair failed and we were unable to recover it.
00:39:25.559 [2024-11-06 15:43:52.893122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.559 [2024-11-06 15:43:52.893176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:25.559 qpair failed and we were unable to recover it.
00:39:25.559 [2024-11-06 15:43:52.893431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.559 [2024-11-06 15:43:52.893477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:25.559 qpair failed and we were unable to recover it.
00:39:25.559 [2024-11-06 15:43:52.893641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.559 [2024-11-06 15:43:52.893687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:25.559 qpair failed and we were unable to recover it.
00:39:25.559 [2024-11-06 15:43:52.898332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.559 [2024-11-06 15:43:52.898382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:25.559 qpair failed and we were unable to recover it.
00:39:25.559 [2024-11-06 15:43:52.898618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.559 [2024-11-06 15:43:52.898665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:25.559 qpair failed and we were unable to recover it.
00:39:25.559 [2024-11-06 15:43:52.899009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.559 [2024-11-06 15:43:52.899058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:25.559 qpair failed and we were unable to recover it.
00:39:25.559 [2024-11-06 15:43:52.899228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.559 [2024-11-06 15:43:52.899285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:25.559 qpair failed and we were unable to recover it.
00:39:25.559 [2024-11-06 15:43:52.899519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.559 [2024-11-06 15:43:52.899569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.559 qpair failed and we were unable to recover it.
00:39:25.561 [2024-11-06 15:43:52.915811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.915854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.916067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.916113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.916278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.916325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.916592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.916638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.916782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.916825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 
00:39:25.561 [2024-11-06 15:43:52.917124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.917170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.917343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.917388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.917590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.917631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.917830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.917872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.918086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.918132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 
00:39:25.561 [2024-11-06 15:43:52.918351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.918395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.918526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.918569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.918836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.918880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.919044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.919088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.919269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.919314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 
00:39:25.561 [2024-11-06 15:43:52.919549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.919596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.919812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.919855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.920099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.920144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.920372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.920425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.561 [2024-11-06 15:43:52.920575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.920619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 
00:39:25.561 [2024-11-06 15:43:52.920911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.561 [2024-11-06 15:43:52.920954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.561 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.921165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.921221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.921541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.921585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.921876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.921928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.922126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.922170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 
00:39:25.562 [2024-11-06 15:43:52.922464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.922511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.922711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.922761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.923026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.923072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.923223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.923271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.923511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.923555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 
00:39:25.562 [2024-11-06 15:43:52.923785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.923829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.923964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.924009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.924284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.924330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.924640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.924685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.924891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.924933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 
00:39:25.562 [2024-11-06 15:43:52.925097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.925141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.925437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.925482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.925699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.925746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.925895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.925949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.926167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.926223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 
00:39:25.562 [2024-11-06 15:43:52.926364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.926408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.926630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.926674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.926809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.926851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.927065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.927118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.927357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.927401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 
00:39:25.562 [2024-11-06 15:43:52.927523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.927566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.927723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.927764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.927971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.928014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.928138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.928180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.928348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.928392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 
00:39:25.562 [2024-11-06 15:43:52.928638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.928701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.928991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.929035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.929258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.929303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.929439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.929483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.929632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.929676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 
00:39:25.562 [2024-11-06 15:43:52.929798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.929843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.929998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.930041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.930237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.930283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.930403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.562 [2024-11-06 15:43:52.930453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.562 qpair failed and we were unable to recover it. 00:39:25.562 [2024-11-06 15:43:52.930735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.930778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 
00:39:25.563 [2024-11-06 15:43:52.930924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.930967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 00:39:25.563 [2024-11-06 15:43:52.931167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.931223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 00:39:25.563 [2024-11-06 15:43:52.931517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.931560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 00:39:25.563 [2024-11-06 15:43:52.931843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.931885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 00:39:25.563 [2024-11-06 15:43:52.932033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.932075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 
00:39:25.563 [2024-11-06 15:43:52.932273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.932318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 00:39:25.563 [2024-11-06 15:43:52.932634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.932677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 00:39:25.563 [2024-11-06 15:43:52.932892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.932936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 00:39:25.563 [2024-11-06 15:43:52.933101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.933146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 00:39:25.563 [2024-11-06 15:43:52.933416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.933461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 
00:39:25.563 [2024-11-06 15:43:52.933605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.933649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 00:39:25.563 [2024-11-06 15:43:52.933920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.933964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 00:39:25.563 [2024-11-06 15:43:52.934100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.934143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 00:39:25.563 [2024-11-06 15:43:52.934448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.934494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 00:39:25.563 [2024-11-06 15:43:52.934712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.563 [2024-11-06 15:43:52.934757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.563 qpair failed and we were unable to recover it. 
00:39:25.563 [2024-11-06 15:43:52.934968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:39:25.563 [2024-11-06 15:43:52.935012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 
00:39:25.563 qpair failed and we were unable to recover it. 
00:39:25.566 [... the three lines above repeat, with identical tqpair/addr/port and only the timestamps advancing, through 2024-11-06 15:43:52.963338 ...]
00:39:25.566 [2024-11-06 15:43:52.963472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.963513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.963814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.963857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.964126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.964168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.964324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.964368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.964520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.964563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 
00:39:25.566 [2024-11-06 15:43:52.964855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.964897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.965109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.965152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.965364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.965409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.965719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.965761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.965906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.965948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 
00:39:25.566 [2024-11-06 15:43:52.966218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.966263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.966425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.966468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.966671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.966713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.966919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.966963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.967092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.967133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 
00:39:25.566 [2024-11-06 15:43:52.967360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.967411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.967680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.566 [2024-11-06 15:43:52.967722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.566 qpair failed and we were unable to recover it. 00:39:25.566 [2024-11-06 15:43:52.968056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.968099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.968317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.968364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.968499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.968541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 
00:39:25.567 [2024-11-06 15:43:52.968750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.968792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.969001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.969044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.969245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.969289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.969554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.969597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.969876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.969918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 
00:39:25.567 [2024-11-06 15:43:52.970228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.970273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.970549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.970593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.970881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.970924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.971065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.971107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.971393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.971438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 
00:39:25.567 [2024-11-06 15:43:52.971575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.971617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.971881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.971923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.972047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.972090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.972373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.972417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.972627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.972669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 
00:39:25.567 [2024-11-06 15:43:52.972812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.972855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.973072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.973114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.973319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.973364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.973627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.973670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.973964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.974009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 
00:39:25.567 [2024-11-06 15:43:52.974144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.974278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.974459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.974502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.974778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.974821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.975095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.975137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.975377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.975420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 
00:39:25.567 [2024-11-06 15:43:52.975558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.975600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.975834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.975877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.976065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.976106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.976360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.976404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.976530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.976572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 
00:39:25.567 [2024-11-06 15:43:52.976777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.976820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.977010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.977052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.977253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.977297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.977501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.977543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 00:39:25.567 [2024-11-06 15:43:52.977836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.567 [2024-11-06 15:43:52.977878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.567 qpair failed and we were unable to recover it. 
00:39:25.567 [2024-11-06 15:43:52.978063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.978111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.978368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.978411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.978617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.978659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.978913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.978956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.979182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.979233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 
00:39:25.568 [2024-11-06 15:43:52.979475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.979519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.979722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.979764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.979949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.979992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.980135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.980177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.980407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.980449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 
00:39:25.568 [2024-11-06 15:43:52.980638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.980680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.980938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.980979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.981116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.981159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.981394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.981438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.981603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.981647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 
00:39:25.568 [2024-11-06 15:43:52.981862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.981905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.982178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.982235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.982499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.982542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.982748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.982791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 00:39:25.568 [2024-11-06 15:43:52.983096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.568 [2024-11-06 15:43:52.983139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:25.568 qpair failed and we were unable to recover it. 
00:39:25.568 [2024-11-06 15:43:52.983361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.983406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.983595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.983637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.983898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.983941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.984132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.984174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.984468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.984511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.984770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.984812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.984963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.985004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.985287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.985334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.985531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.985575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.985796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.985838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.986039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.986083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.986278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.986323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.986582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.986625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.986769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.986818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.986959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.987001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.987236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.987290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.987445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.987487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.987756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.987799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.568 [2024-11-06 15:43:52.988030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.568 [2024-11-06 15:43:52.988073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.568 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.988285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.988330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.988562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.988612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.988813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.988856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.988992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.989034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.989258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.989302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.989447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.989491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.989688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.989731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.989922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.989964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.990231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.990276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.990476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.990520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.990664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.990718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.990978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.991020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.991172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.991223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.991433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.991474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.991679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.991721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.991933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.991975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.992261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.992303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.992445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.992487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.992687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.992729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.992930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.992973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.993115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.993157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.993395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.993438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.993632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.993676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.993805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.993845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.994068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.994110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.994423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.994468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.994590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.994632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.994851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.994893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.995110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.995152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.995392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.995437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.995594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.995636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.995831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.995873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.996009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.569 [2024-11-06 15:43:52.996050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.569 qpair failed and we were unable to recover it.
00:39:25.569 [2024-11-06 15:43:52.996236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.996282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.996563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.996605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.996835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.996877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.997081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.997123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.997334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.997393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.997548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.997591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.997873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.997916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.998173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.998230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.998442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.998491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.998799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.998841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.999106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.999148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.999294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.999340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.999527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.999569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:52.999769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:52.999811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.000000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.000043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.000277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.000322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.000535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.000578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.000840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.000883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.001020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.001063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.001245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.001290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.001488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.001530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.001683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.001726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.001992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.002035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.002237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.002283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.002489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.002532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.002788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.002879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.003049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.003100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.003336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.003385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.003592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.003636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.003920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.003964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.004094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.004137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.004413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.004458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.004596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.004638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.004774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.004818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.005080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.005123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.005363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.005410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.005749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.005796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.006115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.006158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.570 qpair failed and we were unable to recover it.
00:39:25.570 [2024-11-06 15:43:53.006364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.570 [2024-11-06 15:43:53.006408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.006635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.006678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.006880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.006923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.007216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.007261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.007463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.007505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.007728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.007772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.007967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.008010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.008277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.008323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.008514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.008557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.008817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.008860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.009143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.009193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.009516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.009559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.009701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.009743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.009957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.010000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.010275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.010319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.010463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.010506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.010751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.010793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.010957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.011001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.011119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.011162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.011397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.011441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.011720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.011763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.012025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.012068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.012341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.012385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.012588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.012630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.012856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.571 [2024-11-06 15:43:53.012900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.571 qpair failed and we were unable to recover it.
00:39:25.571 [2024-11-06 15:43:53.013102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.013146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 00:39:25.571 [2024-11-06 15:43:53.013299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.013342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 00:39:25.571 [2024-11-06 15:43:53.013554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.013596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 00:39:25.571 [2024-11-06 15:43:53.013787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.013829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 00:39:25.571 [2024-11-06 15:43:53.014064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.014108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 
00:39:25.571 [2024-11-06 15:43:53.014388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.014433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 00:39:25.571 [2024-11-06 15:43:53.014635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.014679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 00:39:25.571 [2024-11-06 15:43:53.014888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.014931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 00:39:25.571 [2024-11-06 15:43:53.015200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.015345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 00:39:25.571 [2024-11-06 15:43:53.015607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.015650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 
00:39:25.571 [2024-11-06 15:43:53.015926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.015968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 00:39:25.571 [2024-11-06 15:43:53.016155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.016197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 00:39:25.571 [2024-11-06 15:43:53.016383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.016428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 00:39:25.571 [2024-11-06 15:43:53.016713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.571 [2024-11-06 15:43:53.016755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.571 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.016982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.017023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 
00:39:25.572 [2024-11-06 15:43:53.017171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.017225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.017431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.017475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.017708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.017750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.017890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.017932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.018148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.018190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 
00:39:25.572 [2024-11-06 15:43:53.018421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.018464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.018678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.018720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.018872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.018914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.019110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.019151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.019317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.019361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 
00:39:25.572 [2024-11-06 15:43:53.019589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.019638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.019762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.019805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.020011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.020054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.020271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.020317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.020582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.020625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 
00:39:25.572 [2024-11-06 15:43:53.020829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.020873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.021062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.021104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.021396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.021442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.021701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.021744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.022052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.022094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 
00:39:25.572 [2024-11-06 15:43:53.022287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.022332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.022545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.022588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.022806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.022849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.023056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.023098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.023301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.023347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 
00:39:25.572 [2024-11-06 15:43:53.023559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.023603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.023870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.023913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.024109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.024152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.024421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.024466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.024676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.024720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 
00:39:25.572 [2024-11-06 15:43:53.024909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.024951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.025256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.025301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.025492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.025534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.025676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.025720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.025917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.025958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 
00:39:25.572 [2024-11-06 15:43:53.026117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.026159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.026391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.572 [2024-11-06 15:43:53.026435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.572 qpair failed and we were unable to recover it. 00:39:25.572 [2024-11-06 15:43:53.026658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.026702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.026914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.026957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.027222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.027266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 
00:39:25.573 [2024-11-06 15:43:53.027416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.027459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.027661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.027704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.027904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.027947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.028171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.028224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.028422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.028464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 
00:39:25.573 [2024-11-06 15:43:53.028660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.028702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.029024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.029067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.029251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.029297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.029508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.029550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.029758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.029802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 
00:39:25.573 [2024-11-06 15:43:53.030008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.030056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.030270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.030314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.030522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.030565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.030840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.030883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.031165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.031214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 
00:39:25.573 [2024-11-06 15:43:53.031502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.031545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.031691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.031733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.031942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.031985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.032190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.032241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.032474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.032516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 
00:39:25.573 [2024-11-06 15:43:53.032741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.032784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.033080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.033124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.033327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.033371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.033630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.033673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 00:39:25.573 [2024-11-06 15:43:53.033970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.573 [2024-11-06 15:43:53.034014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.573 qpair failed and we were unable to recover it. 
00:39:25.576 [2024-11-06 15:43:53.062666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.062709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.062974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.063015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.063234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.063280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.063468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.063511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.063662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.063705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 
00:39:25.576 [2024-11-06 15:43:53.063932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.063975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.064174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.064230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.064441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.064484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.064762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.064804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.064961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.065004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 
00:39:25.576 [2024-11-06 15:43:53.065310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.065355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.065566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.065608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.065735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.065778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.066054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.066097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.066310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.066355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 
00:39:25.576 [2024-11-06 15:43:53.066560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.576 [2024-11-06 15:43:53.066603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.576 qpair failed and we were unable to recover it. 00:39:25.576 [2024-11-06 15:43:53.066809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.066852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.067062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.067105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.067380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.067424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.067565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.067608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 
00:39:25.577 [2024-11-06 15:43:53.067801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.067844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.068059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.068101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.068261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.068307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.068444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.068487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.068793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.068836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 
00:39:25.577 [2024-11-06 15:43:53.069041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.069084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.069355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.069407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.069538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.069580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.069856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.069899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.070035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.070077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 
00:39:25.577 [2024-11-06 15:43:53.070217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.070260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.070459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.070502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.070771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.070813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.071092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.071136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.071454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.071499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 
00:39:25.577 [2024-11-06 15:43:53.071702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.071751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.071974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.072017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.072221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.072264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.072532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.072575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.072843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.072886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 
00:39:25.577 [2024-11-06 15:43:53.073195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.073249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.073403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.073445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.073724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.073768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.073969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.074011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.074163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.074216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 
00:39:25.577 [2024-11-06 15:43:53.074482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.074524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.074730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.074772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.075044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.075087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.075297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.075342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.075548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.075592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 
00:39:25.577 [2024-11-06 15:43:53.075825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.075868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.076070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.076113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.076391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.076435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.076641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.076683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.076886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.076930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 
00:39:25.577 [2024-11-06 15:43:53.077215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.077260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.077462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.577 [2024-11-06 15:43:53.077505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.577 qpair failed and we were unable to recover it. 00:39:25.577 [2024-11-06 15:43:53.077761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.077803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.078074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.078118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.078310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.078354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 
00:39:25.578 [2024-11-06 15:43:53.078571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.078614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.078816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.078859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.079079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.079123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.079331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.079375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.079635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.079677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 
00:39:25.578 [2024-11-06 15:43:53.079815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.079858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.080092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.080136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.080425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.080469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.080678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.080721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.080865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.080908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 
00:39:25.578 [2024-11-06 15:43:53.081120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.081164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.081432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.081476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.081756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.081798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.082000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.082043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 00:39:25.578 [2024-11-06 15:43:53.082354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.578 [2024-11-06 15:43:53.082399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.578 qpair failed and we were unable to recover it. 
00:39:25.578 [2024-11-06 15:43:53.082602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.082646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.082899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.082942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.083157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.083200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.083366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.083410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.083562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.083604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.083859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.083902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.084133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.084177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.084464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.084509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.084697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.084740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.084874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.084917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.085195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.085257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.085408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.085452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.085586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.085629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.085842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.085885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.086040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.086082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.086224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.086269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.086498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.086541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.086753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.086797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.087053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.087096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.578 [2024-11-06 15:43:53.087316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.578 [2024-11-06 15:43:53.087362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.578 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.087507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.087550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.087819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.087861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.088063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.088106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.088345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.088390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.088587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.088629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.088757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.088799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.089100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.089143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.089312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.089363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.089574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.089617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.089845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.089888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.090166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.090216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.090380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.090423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.090560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.090603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.090824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.090868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.091089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.091131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.091291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.091336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.091544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.091587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.091721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.091764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.091950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.091992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.092289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.092335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.092468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.092511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.092719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.092762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.092962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.093005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.093158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.093210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.093419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.093462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.093605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.093648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.579 qpair failed and we were unable to recover it.
00:39:25.579 [2024-11-06 15:43:53.093834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.579 [2024-11-06 15:43:53.093877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.094106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.094149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.094364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.094408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.094674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.094717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.094877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.094920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.095129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.095171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.095345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.095388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.095677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.095721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.096014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.096057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.096217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.096263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.096398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.096442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.096722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.096764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.096901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.096944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.097162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.097216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.097423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.097466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.097672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.097714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.097968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.098010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.098277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.098323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.098581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.098624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.098758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.098799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.099018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.099061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.099287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.099338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.099622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.099664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.099851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.099893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.100081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.100123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.100330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.100375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.100656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.100699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.100890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.100932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.101143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.101185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.101412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.101458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.101661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.101703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.101896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.101939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.102130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.102173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.102319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.102363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.102592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.102635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.102790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.102834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.103042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.103084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.103352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.580 [2024-11-06 15:43:53.103398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.580 qpair failed and we were unable to recover it.
00:39:25.580 [2024-11-06 15:43:53.103548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.103590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.103739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.103781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.104041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.104083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.104372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.104417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.104561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.104604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.104741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.104784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.105008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.105050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.105315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.105362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.105547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.105592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.105734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.105776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.105996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.106039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.106273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.106317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.106572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.106614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.106868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.106911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.107047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.107089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.107302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.107348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.107607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.107651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.107933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.107976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.108184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.108237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.108397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.108440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.108587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.108630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.108818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.108860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.108997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.109040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.109182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.109252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.109457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.109500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.109686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.109728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.109931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.109974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.110107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.110151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.110369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.110414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.110627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.110670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.110841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.581 [2024-11-06 15:43:53.110885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.581 qpair failed and we were unable to recover it.
00:39:25.581 [2024-11-06 15:43:53.111025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.581 [2024-11-06 15:43:53.111069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.581 qpair failed and we were unable to recover it. 00:39:25.581 [2024-11-06 15:43:53.111198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.581 [2024-11-06 15:43:53.111253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.581 qpair failed and we were unable to recover it. 00:39:25.581 [2024-11-06 15:43:53.111407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.581 [2024-11-06 15:43:53.111450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.581 qpair failed and we were unable to recover it. 00:39:25.581 [2024-11-06 15:43:53.111708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.581 [2024-11-06 15:43:53.111752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.111967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.112010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 
00:39:25.582 [2024-11-06 15:43:53.112150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.112192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.112416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.112460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.112650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.112692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.112889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.112932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.113217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.113261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 
00:39:25.582 [2024-11-06 15:43:53.113457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.113500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.113702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.113746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.113983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.114028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.114350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.114396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.114534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.114576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 
00:39:25.582 [2024-11-06 15:43:53.114731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.114775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.115006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.115051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.115332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.115375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.115661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.115703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.115921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.115966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 
00:39:25.582 [2024-11-06 15:43:53.116246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.116292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.116444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.116487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.116695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.116739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.116927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.116969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.117189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.117270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 
00:39:25.582 [2024-11-06 15:43:53.117529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.117573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.117780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.117822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.118054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.118097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.118297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.118344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.118497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.118541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 
00:39:25.582 [2024-11-06 15:43:53.118762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.118805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.119035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.119079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.119348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.119400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.119537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.119579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.119847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.119892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 
00:39:25.582 [2024-11-06 15:43:53.120046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.120090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.120307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.120352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.120632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.120676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.120878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.120922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 00:39:25.582 [2024-11-06 15:43:53.121116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.582 [2024-11-06 15:43:53.121158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.582 qpair failed and we were unable to recover it. 
00:39:25.582 [2024-11-06 15:43:53.121401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.121447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.121592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.121636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.121834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.121876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.122000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.122043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.122182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.122239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 
00:39:25.583 [2024-11-06 15:43:53.122377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.122421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.122580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.122624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.122759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.122802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.123090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.123134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.123347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.123392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 
00:39:25.583 [2024-11-06 15:43:53.123650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.123693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.123822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.123866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.124082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.124127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.124335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.124379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.124527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.124570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 
00:39:25.583 [2024-11-06 15:43:53.124767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.124810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.125032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.125075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.125318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.125363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.125567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.125609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.125881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.125924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 
00:39:25.583 [2024-11-06 15:43:53.126072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.126115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.126304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.126349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.126510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.126553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.126755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.126800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.127014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.127059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 
00:39:25.583 [2024-11-06 15:43:53.127190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.127242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.127457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.127501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.127642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.127685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.127824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.127867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.128058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.128101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 
00:39:25.583 [2024-11-06 15:43:53.128243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.583 [2024-11-06 15:43:53.128287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.583 qpair failed and we were unable to recover it. 00:39:25.583 [2024-11-06 15:43:53.128484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.584 [2024-11-06 15:43:53.128527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.584 qpair failed and we were unable to recover it. 00:39:25.584 [2024-11-06 15:43:53.128747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.584 [2024-11-06 15:43:53.128796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.584 qpair failed and we were unable to recover it. 00:39:25.584 [2024-11-06 15:43:53.128929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.584 [2024-11-06 15:43:53.128974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.584 qpair failed and we were unable to recover it. 00:39:25.584 [2024-11-06 15:43:53.129158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.584 [2024-11-06 15:43:53.129200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.584 qpair failed and we were unable to recover it. 
00:39:25.584 [2024-11-06 15:43:53.129375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.584 [2024-11-06 15:43:53.129418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.584 qpair failed and we were unable to recover it. 00:39:25.584 [2024-11-06 15:43:53.129572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.584 [2024-11-06 15:43:53.129617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.584 qpair failed and we were unable to recover it. 00:39:25.584 [2024-11-06 15:43:53.129743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.584 [2024-11-06 15:43:53.129786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.584 qpair failed and we were unable to recover it. 00:39:25.584 [2024-11-06 15:43:53.129978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.584 [2024-11-06 15:43:53.130022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.584 qpair failed and we were unable to recover it. 00:39:25.584 [2024-11-06 15:43:53.130229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.584 [2024-11-06 15:43:53.130275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.584 qpair failed and we were unable to recover it. 
00:39:25.868 [2024-11-06 15:43:53.157937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.868 [2024-11-06 15:43:53.157980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.868 qpair failed and we were unable to recover it. 00:39:25.868 [2024-11-06 15:43:53.158192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.868 [2024-11-06 15:43:53.158249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.868 qpair failed and we were unable to recover it. 00:39:25.868 [2024-11-06 15:43:53.158476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.868 [2024-11-06 15:43:53.158525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.868 qpair failed and we were unable to recover it. 00:39:25.868 [2024-11-06 15:43:53.158782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.868 [2024-11-06 15:43:53.158833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.868 qpair failed and we were unable to recover it. 00:39:25.868 [2024-11-06 15:43:53.159022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.868 [2024-11-06 15:43:53.159065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.868 qpair failed and we were unable to recover it. 
00:39:25.868 [2024-11-06 15:43:53.159256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.868 [2024-11-06 15:43:53.159301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.868 qpair failed and we were unable to recover it. 00:39:25.868 [2024-11-06 15:43:53.159463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.868 [2024-11-06 15:43:53.159507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.868 qpair failed and we were unable to recover it. 00:39:25.868 [2024-11-06 15:43:53.159634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.868 [2024-11-06 15:43:53.159684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.868 qpair failed and we were unable to recover it. 00:39:25.868 [2024-11-06 15:43:53.159939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.868 [2024-11-06 15:43:53.159980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.868 qpair failed and we were unable to recover it. 00:39:25.868 [2024-11-06 15:43:53.160233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.160277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 
00:39:25.869 [2024-11-06 15:43:53.160509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.160552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.160689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.160731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.160941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.160984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.161119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.161161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.161445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.161490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 
00:39:25.869 [2024-11-06 15:43:53.161631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.161674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.161804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.161848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.161978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.162020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.162177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.162231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.162419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.162462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 
00:39:25.869 [2024-11-06 15:43:53.162669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.162710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.162908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.162952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.163112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.163157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.163389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.163434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.163695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.163737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 
00:39:25.869 [2024-11-06 15:43:53.163946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.163990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.164303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.164349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.164490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.164533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.164722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.164766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.164960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.165003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 
00:39:25.869 [2024-11-06 15:43:53.165291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.165338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.165622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.165666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.165922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.165966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.166188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.166242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.166454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.166498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 
00:39:25.869 [2024-11-06 15:43:53.166801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.166844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.167004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.167047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.167193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.167244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.167458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.167503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.167641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.167683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 
00:39:25.869 [2024-11-06 15:43:53.167875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.167919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.168121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.869 [2024-11-06 15:43:53.168165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.869 qpair failed and we were unable to recover it. 00:39:25.869 [2024-11-06 15:43:53.168303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.168354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.168583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.168627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.168823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.168866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 
00:39:25.870 [2024-11-06 15:43:53.168985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.169028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.169247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.169293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.169428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.169471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.169665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.169709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.169907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.169949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 
00:39:25.870 [2024-11-06 15:43:53.170158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.170211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.170346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.170390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.170652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.170694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.170845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.170889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.171110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.171156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 
00:39:25.870 [2024-11-06 15:43:53.171371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.171416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.171560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.171604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.171789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.171843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.172080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.172124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.172410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.172455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 
00:39:25.870 [2024-11-06 15:43:53.172589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.172632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.172833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.172876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.173166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.173220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.173357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.173401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.173602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.173647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 
00:39:25.870 [2024-11-06 15:43:53.173903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.173947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.174179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.174234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.174427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.174470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.174679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.174723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.174944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.174987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 
00:39:25.870 [2024-11-06 15:43:53.175131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.175174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.175396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.175439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.175725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.175768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.175914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.175957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 00:39:25.870 [2024-11-06 15:43:53.176112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.870 [2024-11-06 15:43:53.176157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.870 qpair failed and we were unable to recover it. 
00:39:25.870 [2024-11-06 15:43:53.176428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.870 [2024-11-06 15:43:53.176471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.870 qpair failed and we were unable to recover it.
00:39:25.870 [2024-11-06 15:43:53.176729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.870 [2024-11-06 15:43:53.176773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.870 qpair failed and we were unable to recover it.
00:39:25.870 [2024-11-06 15:43:53.176963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.870 [2024-11-06 15:43:53.177006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.870 qpair failed and we were unable to recover it.
00:39:25.870 [2024-11-06 15:43:53.177273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.870 [2024-11-06 15:43:53.177319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.870 qpair failed and we were unable to recover it.
00:39:25.870 [2024-11-06 15:43:53.177582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.177625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.177827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.177871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.177993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.178040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.178339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.178389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.178649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.178693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.178839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.178883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.179023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.179068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.179332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.179378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.179580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.179622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.179812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.179855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.179980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.180022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.180233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.180280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.180435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.180477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.180762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.180805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.180994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.181036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.181234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.181279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.181502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.181545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.181845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.181889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.182086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.182127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.182355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.182399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.182658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.182701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.182837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.182879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.183079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.183121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.183435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.183481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.183670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.183712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.183925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.183967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.184158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.184200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.184424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.184468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.184636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.184681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.184886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.184929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.185057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.871 [2024-11-06 15:43:53.185100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.871 qpair failed and we were unable to recover it.
00:39:25.871 [2024-11-06 15:43:53.185317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.185363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.185636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.185679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.185896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.185939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.186226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.186271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.186407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.186450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.186684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.186726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.186924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.186967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.187222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.187266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.187480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.187523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.187721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.187764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.187975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.188018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.188222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.188265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.188474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.188524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.188828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.188872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.189126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.189168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.189460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.189505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.189716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.189758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.190047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.190090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.190289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.190335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.190549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.190592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.190800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.190843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.191101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.191156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.191362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.191405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.191672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.191715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.191975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.192018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.192219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.192263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.192494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.192537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.192820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.192862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.193137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.193180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.193326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.193369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.193583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.193625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.193824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.193867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.194002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.194046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.194237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.194281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.194417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.194460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.194691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.872 [2024-11-06 15:43:53.194735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.872 qpair failed and we were unable to recover it.
00:39:25.872 [2024-11-06 15:43:53.194954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.194998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.195177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.195229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.195378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.195420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.195633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.195675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.195973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.196016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.196223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.196265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.196478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.196521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.196673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.196717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.196916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.196959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.197099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.197140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.197357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.197401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.197595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.197638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.197877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.197921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.198060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.198101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.198317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.198362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.198537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.198579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.198870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.198919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.199129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.199172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.199322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.199365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.199638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.199681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.199878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.199922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.200194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.200249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.200382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.200423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.200545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.200588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.200823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.200867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.201084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.201128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.201325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.201368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.201576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.201618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.201863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.201905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.202189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.202244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.202400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.202442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.202595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.202639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.202887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.202930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.203136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.203179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.203340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.203382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.873 [2024-11-06 15:43:53.203594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.873 [2024-11-06 15:43:53.203637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.873 qpair failed and we were unable to recover it.
00:39:25.874 [2024-11-06 15:43:53.203785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.874 [2024-11-06 15:43:53.203828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.874 qpair failed and we were unable to recover it.
00:39:25.874 [2024-11-06 15:43:53.204026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.874 [2024-11-06 15:43:53.204069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.874 qpair failed and we were unable to recover it.
00:39:25.874 [2024-11-06 15:43:53.204275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.874 [2024-11-06 15:43:53.204320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.874 qpair failed and we were unable to recover it.
00:39:25.874 [2024-11-06 15:43:53.204516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.874 [2024-11-06 15:43:53.204557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.874 qpair failed and we were unable to recover it.
00:39:25.874 [2024-11-06 15:43:53.204846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.874 [2024-11-06 15:43:53.204890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.874 qpair failed and we were unable to recover it.
00:39:25.874 [2024-11-06 15:43:53.205168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.874 [2024-11-06 15:43:53.205221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.874 qpair failed and we were unable to recover it.
00:39:25.874 [2024-11-06 15:43:53.205467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.874 [2024-11-06 15:43:53.205510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.874 qpair failed and we were unable to recover it.
00:39:25.874 [2024-11-06 15:43:53.205748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.874 [2024-11-06 15:43:53.205790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.874 qpair failed and we were unable to recover it.
00:39:25.874 [2024-11-06 15:43:53.206082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.206126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.206411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.206454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.206742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.206786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.206992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.207035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.207320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.207364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 
00:39:25.874 [2024-11-06 15:43:53.207651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.207693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.207845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.207888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.208116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.208158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.208365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.208409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.208611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.208654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 
00:39:25.874 [2024-11-06 15:43:53.208910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.208953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.209154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.209197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.209522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.209567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.209817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.209861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.210144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.210185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 
00:39:25.874 [2024-11-06 15:43:53.210440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.210483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.210751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.210794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.211060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.211103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.211313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.211358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.211551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.211594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 
00:39:25.874 [2024-11-06 15:43:53.211837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.211880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.212074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.212116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.212402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.212446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.212650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.212693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.212970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.213015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 
00:39:25.874 [2024-11-06 15:43:53.213279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.213323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.213538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.874 [2024-11-06 15:43:53.213581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.874 qpair failed and we were unable to recover it. 00:39:25.874 [2024-11-06 15:43:53.213821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.213864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.214004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.214048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.214186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.214238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 
00:39:25.875 [2024-11-06 15:43:53.214452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.214495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.214702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.214744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.214936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.214979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.215186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.215242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.215457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.215500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 
00:39:25.875 [2024-11-06 15:43:53.215651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.215694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.215902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.215946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.216162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.216216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.216478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.216521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.216711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.216760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 
00:39:25.875 [2024-11-06 15:43:53.216973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.217016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.217237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.217280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.217550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.217594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.217881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.217924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.218220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.218266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 
00:39:25.875 [2024-11-06 15:43:53.218537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.218579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.218811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.218855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.219004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.219047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.219277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.219322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.219601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.219644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 
00:39:25.875 [2024-11-06 15:43:53.219871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.219914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.220125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.220167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.220329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.220374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.220567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.220610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.220875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.220917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 
00:39:25.875 [2024-11-06 15:43:53.221120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.221163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.221477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.221522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.221801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.221844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.222075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.222117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.222261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.222304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 
00:39:25.875 [2024-11-06 15:43:53.222473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.222516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.222706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.222749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.222951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.875 [2024-11-06 15:43:53.222993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.875 qpair failed and we were unable to recover it. 00:39:25.875 [2024-11-06 15:43:53.223133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.223176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.223452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.223497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 
00:39:25.876 [2024-11-06 15:43:53.223647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.223689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.223839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.223883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.224022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.224065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.224227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.224279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.224409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.224451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 
00:39:25.876 [2024-11-06 15:43:53.224670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.224713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.224992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.225034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.225302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.225347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.225548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.225593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.225806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.225849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 
00:39:25.876 [2024-11-06 15:43:53.226145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.226188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.226342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.226386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.226601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.226643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.226874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.226916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 00:39:25.876 [2024-11-06 15:43:53.227173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.876 [2024-11-06 15:43:53.227231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.876 qpair failed and we were unable to recover it. 
00:39:25.879 [2024-11-06 15:43:53.254261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.879 [2024-11-06 15:43:53.254306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.879 qpair failed and we were unable to recover it. 00:39:25.879 [2024-11-06 15:43:53.254460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.879 [2024-11-06 15:43:53.254503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.879 qpair failed and we were unable to recover it. 00:39:25.879 [2024-11-06 15:43:53.254738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.879 [2024-11-06 15:43:53.254781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.879 qpair failed and we were unable to recover it. 00:39:25.879 [2024-11-06 15:43:53.255010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.879 [2024-11-06 15:43:53.255054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.879 qpair failed and we were unable to recover it. 00:39:25.879 [2024-11-06 15:43:53.255343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.879 [2024-11-06 15:43:53.255389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.879 qpair failed and we were unable to recover it. 
00:39:25.879 [2024-11-06 15:43:53.255599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.879 [2024-11-06 15:43:53.255642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.879 qpair failed and we were unable to recover it. 00:39:25.879 [2024-11-06 15:43:53.255772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.879 [2024-11-06 15:43:53.255823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.879 qpair failed and we were unable to recover it. 00:39:25.879 [2024-11-06 15:43:53.256025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.879 [2024-11-06 15:43:53.256068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.879 qpair failed and we were unable to recover it. 00:39:25.879 [2024-11-06 15:43:53.256284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.879 [2024-11-06 15:43:53.256327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.879 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.256449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.256498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 
00:39:25.880 [2024-11-06 15:43:53.256736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.256780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.256988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.257031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.257167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.257217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.257371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.257416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.257709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.257753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 
00:39:25.880 [2024-11-06 15:43:53.257888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.257930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.258136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.258179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.258426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.258470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.258608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.258651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.258847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.258891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 
00:39:25.880 [2024-11-06 15:43:53.259030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.259073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.259289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.259334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.259564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.259608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.259889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.259933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.260128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.260171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 
00:39:25.880 [2024-11-06 15:43:53.260369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.260413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.260591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.260635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.260794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.260839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.261071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.261114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.261370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.261415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 
00:39:25.880 [2024-11-06 15:43:53.261678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.261721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.261912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.261954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.262090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.262133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.262419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.262464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.262652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.262695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 
00:39:25.880 [2024-11-06 15:43:53.262827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.262870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.263015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.263059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.263342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.263387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.263659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.263702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.263838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.263881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 
00:39:25.880 [2024-11-06 15:43:53.264023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.264067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.264222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.264266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.264452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.264496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.264665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.264708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.264865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.264908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 
00:39:25.880 [2024-11-06 15:43:53.265223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.880 [2024-11-06 15:43:53.265267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.880 qpair failed and we were unable to recover it. 00:39:25.880 [2024-11-06 15:43:53.265539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.265583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.265850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.265906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.266197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.266261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.266564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.266614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 
00:39:25.881 [2024-11-06 15:43:53.266822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.266865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.267063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.267105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.267361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.267406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.267546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.267589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.267716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.267758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 
00:39:25.881 [2024-11-06 15:43:53.267977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.268019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.268239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.268283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.268518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.268561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.268829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.268871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.269078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.269121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 
00:39:25.881 [2024-11-06 15:43:53.269268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.269311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.269504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.269546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.269692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.269735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.269932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.269975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.270166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.270215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 
00:39:25.881 [2024-11-06 15:43:53.270371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.270414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.270553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.270596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.270899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.270942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.271227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.271272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.271482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.271525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 
00:39:25.881 [2024-11-06 15:43:53.271640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.271682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.271872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.271915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.272104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.272146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.272364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.272408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 00:39:25.881 [2024-11-06 15:43:53.272543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.881 [2024-11-06 15:43:53.272585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.881 qpair failed and we were unable to recover it. 
00:39:25.881 [2024-11-06 15:43:53.272845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.881 [2024-11-06 15:43:53.272888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:25.881 qpair failed and we were unable to recover it.
00:39:25.881 [the same three-line error (posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock failure for tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats continuously through timestamp 2024-11-06 15:43:53.301662]
00:39:25.885 [2024-11-06 15:43:53.301946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.301988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.302109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.302152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.302283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.302328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.302592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.302635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.302756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.302799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 
00:39:25.885 [2024-11-06 15:43:53.302953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.302995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.303273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.303319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.303535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.303578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.303707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.303749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.303880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.303922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 
00:39:25.885 [2024-11-06 15:43:53.304224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.304267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.304468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.304511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.304722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.304763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.305041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.305084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.305365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.305409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 
00:39:25.885 [2024-11-06 15:43:53.305666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.305708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.305925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.305969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.306104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.306146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.306323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.306367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.306568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.306616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 
00:39:25.885 [2024-11-06 15:43:53.306765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.306808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.307065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.307108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.307324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.307368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.307562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.307605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.307815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.307858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 
00:39:25.885 [2024-11-06 15:43:53.308075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.308118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.308325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.308369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.308585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.308629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.308842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.308884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.309141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.309184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 
00:39:25.885 [2024-11-06 15:43:53.309419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.309462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.309608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.309650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.885 [2024-11-06 15:43:53.309853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.885 [2024-11-06 15:43:53.309896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.885 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.310173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.310224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.310367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.310409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 
00:39:25.886 [2024-11-06 15:43:53.310610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.310653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.310852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.310893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.311112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.311155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.311291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.311335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.311589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.311631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 
00:39:25.886 [2024-11-06 15:43:53.311840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.311883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.312038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.312080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.312292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.312336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.312476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.312518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.312723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.312765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 
00:39:25.886 [2024-11-06 15:43:53.312891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.312932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.313144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.313188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.313453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.313497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.313627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.313670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.313869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.313912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 
00:39:25.886 [2024-11-06 15:43:53.314149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.314193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.314350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.314393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.314582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.314625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.314765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.314808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.315010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.315052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 
00:39:25.886 [2024-11-06 15:43:53.315315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.315359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.315492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.315534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.315662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.315705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.315895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.315937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.316192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.316252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 
00:39:25.886 [2024-11-06 15:43:53.316521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.316564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.316709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.316751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.316976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.886 [2024-11-06 15:43:53.317017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.886 qpair failed and we were unable to recover it. 00:39:25.886 [2024-11-06 15:43:53.317302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.317346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it. 00:39:25.887 [2024-11-06 15:43:53.317538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.317581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it. 
00:39:25.887 [2024-11-06 15:43:53.317701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.317743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it. 00:39:25.887 [2024-11-06 15:43:53.318022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.318065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it. 00:39:25.887 [2024-11-06 15:43:53.318321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.318365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it. 00:39:25.887 [2024-11-06 15:43:53.318624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.318668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it. 00:39:25.887 [2024-11-06 15:43:53.318880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.318922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it. 
00:39:25.887 [2024-11-06 15:43:53.319120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.319164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it. 00:39:25.887 [2024-11-06 15:43:53.319365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.319409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it. 00:39:25.887 [2024-11-06 15:43:53.319650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.319693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it. 00:39:25.887 [2024-11-06 15:43:53.319837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.319880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it. 00:39:25.887 [2024-11-06 15:43:53.320117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.320158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it. 
00:39:25.887 [2024-11-06 15:43:53.320443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.887 [2024-11-06 15:43:53.320487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.887 qpair failed and we were unable to recover it.
00:39:25.890 [previous three-message sequence repeated 114 more times between 15:43:53.320695 and 15:43:53.350265: every connect() to 10.0.0.2:4420 failed with errno = 111 (ECONNREFUSED) and each qpair on tqpair=0x61500032eb80 could not be recovered]
00:39:25.890 [2024-11-06 15:43:53.350465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.350508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.350768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.350811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.351066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.351109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.351388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.351433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.351691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.351734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 
00:39:25.890 [2024-11-06 15:43:53.351944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.351986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.352130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.352173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.352412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.352456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.352657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.352700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.352867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.352910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 
00:39:25.890 [2024-11-06 15:43:53.353106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.353149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.353441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.353484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.353736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.353779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.353990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.354033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.354336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.354380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 
00:39:25.890 [2024-11-06 15:43:53.354638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.890 [2024-11-06 15:43:53.354680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.890 qpair failed and we were unable to recover it. 00:39:25.890 [2024-11-06 15:43:53.354806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.354853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.354996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.355039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.355316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.355361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.355574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.355617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 
00:39:25.891 [2024-11-06 15:43:53.355750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.355792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.355976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.356019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.356234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.356278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.356487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.356529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.356649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.356690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 
00:39:25.891 [2024-11-06 15:43:53.356882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.356922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.357166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.357217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.357450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.357493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.357687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.357728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.357942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.357983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 
00:39:25.891 [2024-11-06 15:43:53.358114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.358157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.358365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.358409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.358597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.358641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.358781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.358823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.358983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.359024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 
00:39:25.891 [2024-11-06 15:43:53.359181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.359237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.359359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.359402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.359606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.359649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.359941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.359983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.360124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.360166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 
00:39:25.891 [2024-11-06 15:43:53.360385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.360429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.360637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.360680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.360824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.360866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.360995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.361037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.361246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.361290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 
00:39:25.891 [2024-11-06 15:43:53.361415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.361458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.361740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.361783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.362083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.362126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.891 qpair failed and we were unable to recover it. 00:39:25.891 [2024-11-06 15:43:53.362263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.891 [2024-11-06 15:43:53.362308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.362445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.362488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 
00:39:25.892 [2024-11-06 15:43:53.362755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.362799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.363049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.363093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.363350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.363395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.363589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.363631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.363752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.363794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 
00:39:25.892 [2024-11-06 15:43:53.364076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.364118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.364326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.364377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.364569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.364612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.364816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.364860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.365053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.365097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 
00:39:25.892 [2024-11-06 15:43:53.365290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.365334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.365468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.365510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.365706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.365748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.365958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.366002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.366217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.366261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 
00:39:25.892 [2024-11-06 15:43:53.366515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.366558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.366711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.366754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.367031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.367074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.367330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.367374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.367506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.367548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 
00:39:25.892 [2024-11-06 15:43:53.367760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.367804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.368025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.368066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.368362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.368407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.368545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.368589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 00:39:25.892 [2024-11-06 15:43:53.368712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.368755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it. 
00:39:25.892 [2024-11-06 15:43:53.369028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.892 [2024-11-06 15:43:53.369070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.892 qpair failed and we were unable to recover it.
(previous message repeated for each subsequent reconnect attempt from [2024-11-06 15:43:53.369280] through [2024-11-06 15:43:53.398067]; identical except timestamps: connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it)
00:39:25.896 [2024-11-06 15:43:53.398285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.398330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.398533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.398576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.398803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.398845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.398991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.399035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.399238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.399283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 
00:39:25.896 [2024-11-06 15:43:53.399567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.399609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.399757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.399800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.400102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.400147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.400359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.400404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.400662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.400705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 
00:39:25.896 [2024-11-06 15:43:53.400908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.400952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.401150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.401196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.401359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.401404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.401614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.401658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.401796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.401838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 
00:39:25.896 [2024-11-06 15:43:53.402045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.402089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.402298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.402343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.402468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.402511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.402648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.402691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.402896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.402939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 
00:39:25.896 [2024-11-06 15:43:53.403142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.403186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.403387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.403431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.403739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.403783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.404001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.404046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.404255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.404301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 
00:39:25.896 [2024-11-06 15:43:53.404489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.404532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.404785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.404834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.404964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.405009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.896 qpair failed and we were unable to recover it. 00:39:25.896 [2024-11-06 15:43:53.405312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.896 [2024-11-06 15:43:53.405359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.405552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.405595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 
00:39:25.897 [2024-11-06 15:43:53.405849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.405892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.406108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.406155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.406443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.406487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.406688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.406732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.406861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.406906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 
00:39:25.897 [2024-11-06 15:43:53.407125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.407168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.407433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.407478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.407613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.407656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.407923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.407966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.408193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.408269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 
00:39:25.897 [2024-11-06 15:43:53.408406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.408450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.408727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.408770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.408998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.409041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.409304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.409350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.409564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.409609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 
00:39:25.897 [2024-11-06 15:43:53.409807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.409849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.410068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.410112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.410271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.410317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.410511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.410554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.410755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.410800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 
00:39:25.897 [2024-11-06 15:43:53.410947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.410990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.411199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.411255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.411482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.411525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.411806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.411852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.412072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.412115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 
00:39:25.897 [2024-11-06 15:43:53.412330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.412377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.412581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.412624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.412756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.412799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.413085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.413129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.413344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.413390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 
00:39:25.897 [2024-11-06 15:43:53.413583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.413627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.413762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.897 [2024-11-06 15:43:53.413804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.897 qpair failed and we were unable to recover it. 00:39:25.897 [2024-11-06 15:43:53.413941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.413984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.414180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.414236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.414377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.414421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 
00:39:25.898 [2024-11-06 15:43:53.414556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.414598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.414804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.414852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.415142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.415187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.415334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.415377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.415598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.415641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 
00:39:25.898 [2024-11-06 15:43:53.415831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.415873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.416069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.416113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.416339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.416384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.416613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.416654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.416853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.416902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 
00:39:25.898 [2024-11-06 15:43:53.417116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.417159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.417378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.417421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.417678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.417723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.417862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.417904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.418123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.418166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 
00:39:25.898 [2024-11-06 15:43:53.418393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.418437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.418644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.418687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.418877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.418919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.419116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.419160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.419389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.419433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 
00:39:25.898 [2024-11-06 15:43:53.419717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.419762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.420035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.420078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.420306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.420352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.420551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.420593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.420728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.420771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 
00:39:25.898 [2024-11-06 15:43:53.420908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.420950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.421145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.421190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.421367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.421412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.421637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.421680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.421880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.421922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 
00:39:25.898 [2024-11-06 15:43:53.422069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.422115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.422378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.422424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.422550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.422593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.422734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.422776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 00:39:25.898 [2024-11-06 15:43:53.422950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.898 [2024-11-06 15:43:53.422994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.898 qpair failed and we were unable to recover it. 
00:39:25.899 [2024-11-06 15:43:53.423213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.423257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.423454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.423499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.423632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.423675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.423982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.424028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.424288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.424335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 
00:39:25.899 [2024-11-06 15:43:53.424529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.424573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.424832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.424882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.425158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.425210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.425372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.425416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.425611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.425653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 
00:39:25.899 [2024-11-06 15:43:53.427275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.427347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.427646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.427697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.427836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.427882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.428072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.428115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.428315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.428362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 
00:39:25.899 [2024-11-06 15:43:53.428525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.428570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.428771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.428815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.428949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.428994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.429145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.429188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.429456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.429502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 
00:39:25.899 [2024-11-06 15:43:53.429774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.429818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.429951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.429995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.430196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.430253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.430466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.430510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.430718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.430762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 
00:39:25.899 [2024-11-06 15:43:53.430908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.430952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.431101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.431145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.431359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.431405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.431652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.431696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.431828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.431872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 
00:39:25.899 [2024-11-06 15:43:53.432001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.432043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.432218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.432264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.432478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.432517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.432723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.432813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 00:39:25.899 [2024-11-06 15:43:53.432998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.899 [2024-11-06 15:43:53.433049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.899 qpair failed and we were unable to recover it. 
00:39:25.900 [2024-11-06 15:43:53.433271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.433319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.433514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.433557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.433747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.433792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.433997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.434038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.434263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.434311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 
00:39:25.900 [2024-11-06 15:43:53.434520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.434562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.434777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.434821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.435016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.435059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.435283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.435332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.435454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.435497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 
00:39:25.900 [2024-11-06 15:43:53.435761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.435805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.436020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.436072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.436231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.436278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.436472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.436515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.436648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.436692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 
00:39:25.900 [2024-11-06 15:43:53.436873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.436915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.437051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.437097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.437248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.437294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.437429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.437472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.437685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.437728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 
00:39:25.900 [2024-11-06 15:43:53.437949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.437994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.438164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.438218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.438363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.438406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.438671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.438714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.438945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.438990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 
00:39:25.900 [2024-11-06 15:43:53.439212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.439259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.439459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.439502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.439723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.439766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.439989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.440033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.440315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.440360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 
00:39:25.900 [2024-11-06 15:43:53.440616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.440659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.900 [2024-11-06 15:43:53.440855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.900 [2024-11-06 15:43:53.440898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.900 qpair failed and we were unable to recover it. 00:39:25.901 [2024-11-06 15:43:53.441123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.901 [2024-11-06 15:43:53.441168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.901 qpair failed and we were unable to recover it. 00:39:25.901 [2024-11-06 15:43:53.441440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.901 [2024-11-06 15:43:53.441485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.901 qpair failed and we were unable to recover it. 00:39:25.901 [2024-11-06 15:43:53.441668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.901 [2024-11-06 15:43:53.441711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.901 qpair failed and we were unable to recover it. 
00:39:25.901 [2024-11-06 15:43:53.441902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.901 [2024-11-06 15:43:53.441945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:25.901 qpair failed and we were unable to recover it.
00:39:25.901 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously through 15:43:53.469, alternating between tqpair=0x61500032ff80 and tqpair=0x61500032eb80, always for addr=10.0.0.2, port=4420; repeated entries omitted ...]
00:39:25.904 [2024-11-06 15:43:53.469304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.469348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.469541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.469582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.469724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.469772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.469910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.469950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.470091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.470133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 
00:39:25.904 [2024-11-06 15:43:53.470337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.470380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.470606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.470651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.470849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.470891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.471110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.471158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.471378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.471420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 
00:39:25.904 [2024-11-06 15:43:53.471641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.471686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.471876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.471918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.472048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.472090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.472282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.472325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.472470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.472512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 
00:39:25.904 [2024-11-06 15:43:53.472663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.472718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.472945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.472986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.473113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.473154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.473374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.473421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 00:39:25.904 [2024-11-06 15:43:53.473692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.473734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.904 qpair failed and we were unable to recover it. 
00:39:25.904 [2024-11-06 15:43:53.473961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.904 [2024-11-06 15:43:53.474003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.474129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.474171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.474405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.474452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.474647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.474689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.474881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.474923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 
00:39:25.905 [2024-11-06 15:43:53.475112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.475154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.475312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.475356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.475487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.475527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.475679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.475721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.475924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.475965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 
00:39:25.905 [2024-11-06 15:43:53.476176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.476232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.476492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.476533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.476670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.476711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.476840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.476880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.477021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.477064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 
00:39:25.905 [2024-11-06 15:43:53.477199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.477256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.477448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.477490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.477619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.477658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.477814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.477857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.478143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.478185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 
00:39:25.905 [2024-11-06 15:43:53.478390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.478435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.478588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.478628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.478780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.478821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.478965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.479007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.479221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.479263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 
00:39:25.905 [2024-11-06 15:43:53.479412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.479453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.479661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.479704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.479828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.479870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.480053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.480100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.480302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.480345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 
00:39:25.905 [2024-11-06 15:43:53.480581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.905 [2024-11-06 15:43:53.480625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:25.905 qpair failed and we were unable to recover it. 00:39:25.905 [2024-11-06 15:43:53.480838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.187 [2024-11-06 15:43:53.480880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.187 qpair failed and we were unable to recover it. 00:39:26.187 [2024-11-06 15:43:53.481070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.187 [2024-11-06 15:43:53.481111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.187 qpair failed and we were unable to recover it. 00:39:26.187 [2024-11-06 15:43:53.481273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.187 [2024-11-06 15:43:53.481318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.187 qpair failed and we were unable to recover it. 00:39:26.187 [2024-11-06 15:43:53.481471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.187 [2024-11-06 15:43:53.481512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.187 qpair failed and we were unable to recover it. 
00:39:26.187 [2024-11-06 15:43:53.481651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.187 [2024-11-06 15:43:53.481692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.187 qpair failed and we were unable to recover it. 00:39:26.187 [2024-11-06 15:43:53.481888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.187 [2024-11-06 15:43:53.481929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.187 qpair failed and we were unable to recover it. 00:39:26.187 [2024-11-06 15:43:53.482073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.187 [2024-11-06 15:43:53.482115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.187 qpair failed and we were unable to recover it. 00:39:26.187 [2024-11-06 15:43:53.482317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.187 [2024-11-06 15:43:53.482359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.187 qpair failed and we were unable to recover it. 00:39:26.187 [2024-11-06 15:43:53.482558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.187 [2024-11-06 15:43:53.482599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.187 qpair failed and we were unable to recover it. 
00:39:26.187 [2024-11-06 15:43:53.482721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.187 [2024-11-06 15:43:53.482757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.187 qpair failed and we were unable to recover it. 00:39:26.187 [2024-11-06 15:43:53.482962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.187 [2024-11-06 15:43:53.483000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.187 qpair failed and we were unable to recover it. 00:39:26.187 [2024-11-06 15:43:53.483216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.187 [2024-11-06 15:43:53.483257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.187 qpair failed and we were unable to recover it. 00:39:26.187 [2024-11-06 15:43:53.483451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.188 [2024-11-06 15:43:53.483490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.188 qpair failed and we were unable to recover it. 00:39:26.188 [2024-11-06 15:43:53.483697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.188 [2024-11-06 15:43:53.483736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.188 qpair failed and we were unable to recover it. 
00:39:26.188 [2024-11-06 15:43:53.483855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.188 [2024-11-06 15:43:53.483893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.188 qpair failed and we were unable to recover it. 00:39:26.188 [2024-11-06 15:43:53.484168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.188 [2024-11-06 15:43:53.484215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.188 qpair failed and we were unable to recover it. 00:39:26.188 [2024-11-06 15:43:53.484348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.188 [2024-11-06 15:43:53.484386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.188 qpair failed and we were unable to recover it. 00:39:26.188 [2024-11-06 15:43:53.484512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.188 [2024-11-06 15:43:53.484550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.188 qpair failed and we were unable to recover it. 00:39:26.188 [2024-11-06 15:43:53.484789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.188 [2024-11-06 15:43:53.484829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.188 qpair failed and we were unable to recover it. 
00:39:26.188 [2024-11-06 15:43:53.485019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.188 [2024-11-06 15:43:53.485076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.188 qpair failed and we were unable to recover it. 00:39:26.188 [2024-11-06 15:43:53.485242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.188 [2024-11-06 15:43:53.485285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.188 qpair failed and we were unable to recover it. 00:39:26.188 [2024-11-06 15:43:53.485416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.188 [2024-11-06 15:43:53.485457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.188 qpair failed and we were unable to recover it. 00:39:26.188 [2024-11-06 15:43:53.485591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.188 [2024-11-06 15:43:53.485627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.188 qpair failed and we were unable to recover it. 00:39:26.188 [2024-11-06 15:43:53.485772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.188 [2024-11-06 15:43:53.485812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.188 qpair failed and we were unable to recover it. 
00:39:26.188 [2024-11-06 15:43:53.485939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.485977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.486106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.486143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.486312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.486353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.486482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.486519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.486727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.486781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.486965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.487002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.487186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.487234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.487358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.487395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.487584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.487623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.487743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.487780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.487916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.487954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.488083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.488123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.488330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.488376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.488582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.488631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.488839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.488881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.489029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.489070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.489197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.489261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.489375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.489414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.489533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.489570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.489707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.489747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.489859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.188 [2024-11-06 15:43:53.489897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.188 qpair failed and we were unable to recover it.
00:39:26.188 [2024-11-06 15:43:53.490023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.490061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.490251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.490291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.490484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.490523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.490730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.490773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.490963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.491005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.491191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.491243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.491383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.491424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.491551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.491592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.491733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.491776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.491988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.492030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.492199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.492257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.492384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.492426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.492565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.492603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.492729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.492768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.493023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.493062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.493258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.493299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.493430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.493468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.493661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.493699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.493819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.493857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.493988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.494026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.494221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.494260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.494397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.494434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.494644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.494688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.494901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.494943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.495157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.495200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.495369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.495413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.495558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.495597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.495749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.495786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.495919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.495956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.496149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.496187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.496496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.496537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.496751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.496790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.496914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.496957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.497087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.497126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.497344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.497391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.189 [2024-11-06 15:43:53.497576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.189 [2024-11-06 15:43:53.497614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.189 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.497739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.497776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.497904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.497944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.498079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.498117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.498338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.498378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.498566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.498603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.498792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.498832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.498984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.499023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.499153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.499192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.499401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.499439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.499559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.499596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.499742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.499782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.499901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.499992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.500121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.500161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.500308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.500347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.500483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.500522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.500710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.500749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.500875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.500914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.501108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.501148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.501309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.501350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.501562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.501604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.501804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.501848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.501984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.502026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.502229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.502275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.502429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.502472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.502732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.502788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.502923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.502961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.503092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.503132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.503386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.503425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.503615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.503655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.503776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.503815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.504038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.504078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.504284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.504324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.504589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.504627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.504809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.504848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.505054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.190 [2024-11-06 15:43:53.505093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.190 qpair failed and we were unable to recover it.
00:39:26.190 [2024-11-06 15:43:53.505281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.505321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.505449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.505493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.505620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.505658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.505859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.505899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.506081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.506120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.506325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.506367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.506488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.506527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.506722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.506762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.506884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.506923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.507171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.507242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.507440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.507479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.507669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.507708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.507896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.507935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.508116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.508154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.508282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.508321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.508525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.508565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.508750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.508789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.508928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.508967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.509242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.509287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.509481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.509520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.509714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.509753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.509892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.509930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.510127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.191 [2024-11-06 15:43:53.510171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.191 qpair failed and we were unable to recover it.
00:39:26.191 [2024-11-06 15:43:53.510386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.510430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 00:39:26.191 [2024-11-06 15:43:53.510591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.510634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 00:39:26.191 [2024-11-06 15:43:53.510848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.510887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 00:39:26.191 [2024-11-06 15:43:53.511019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.511057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 00:39:26.191 [2024-11-06 15:43:53.511255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.511297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 
00:39:26.191 [2024-11-06 15:43:53.511552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.511591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 00:39:26.191 [2024-11-06 15:43:53.511720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.511758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 00:39:26.191 [2024-11-06 15:43:53.512007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.512049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 00:39:26.191 [2024-11-06 15:43:53.512190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.512243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 00:39:26.191 [2024-11-06 15:43:53.512506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.512550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 
00:39:26.191 [2024-11-06 15:43:53.512695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.512738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 00:39:26.191 [2024-11-06 15:43:53.512999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.513042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 00:39:26.191 [2024-11-06 15:43:53.513259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.191 [2024-11-06 15:43:53.513305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.191 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.513461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.513504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.513699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.513741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 
00:39:26.192 [2024-11-06 15:43:53.513932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.513974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.514140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.514184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.514365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.514434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.514660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.514714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.514906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.514949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 
00:39:26.192 [2024-11-06 15:43:53.515160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.515216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.515360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.515403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.515549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.515592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.515741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.515785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.515985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.516029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 
00:39:26.192 [2024-11-06 15:43:53.516228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.516273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.516554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.516597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.516736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.516779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.516919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.516962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.517244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.517288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 
00:39:26.192 [2024-11-06 15:43:53.517590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.517634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.517831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.517874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.518085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.518129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.518278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.518320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.518466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.518508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 
00:39:26.192 [2024-11-06 15:43:53.518761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.518802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.518994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.519038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.519277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.519320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.192 [2024-11-06 15:43:53.519462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.192 [2024-11-06 15:43:53.519503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.192 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.519706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.519749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 
00:39:26.193 [2024-11-06 15:43:53.519954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.519997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.520216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.520261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.520399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.520442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.520721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.520764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.520901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.520945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 
00:39:26.193 [2024-11-06 15:43:53.521157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.521200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.521419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.521463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.521599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.521641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.521852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.521895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.522115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.522156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 
00:39:26.193 [2024-11-06 15:43:53.522414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.522458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.522595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.522636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.522862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.522905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.523120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.523161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.523374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.523418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 
00:39:26.193 [2024-11-06 15:43:53.523637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.523679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.523875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.523919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.524130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.524173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.524410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.524465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.524618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.524661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 
00:39:26.193 [2024-11-06 15:43:53.524802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.524844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.525051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.525092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.525289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.525333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.525538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.525581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.525851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.525894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 
00:39:26.193 [2024-11-06 15:43:53.526085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.526128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.526287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.526331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.526526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.526569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.526720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.526764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.526886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.526927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 
00:39:26.193 [2024-11-06 15:43:53.527079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.527121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.527266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.527309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.193 [2024-11-06 15:43:53.527621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.193 [2024-11-06 15:43:53.527664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.193 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.527866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.527909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.528066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.528108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 
00:39:26.194 [2024-11-06 15:43:53.528331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.528375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.528585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.528628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.528814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.528856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.528988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.529030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.529310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.529355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 
00:39:26.194 [2024-11-06 15:43:53.529623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.529669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.529811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.529865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.530079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.530122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.530270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.530312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.530464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.530507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 
00:39:26.194 [2024-11-06 15:43:53.530730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.530773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.530905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.530947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.531090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.531132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.531372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.531418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.531699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.531741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 
00:39:26.194 [2024-11-06 15:43:53.531869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.531911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.532107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.532149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.532450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.532495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.532697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.532740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.533022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.533065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 
00:39:26.194 [2024-11-06 15:43:53.533232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.533277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.533410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.533455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.533633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.533675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.533806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.533853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.534047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.534089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 
00:39:26.194 [2024-11-06 15:43:53.534305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.534352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.534492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.534535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.534669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.534712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.534917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.534960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.535104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.535146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 
00:39:26.194 [2024-11-06 15:43:53.535371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.535413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.535628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.535670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.535872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.535913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.536059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.194 [2024-11-06 15:43:53.536102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.194 qpair failed and we were unable to recover it. 00:39:26.194 [2024-11-06 15:43:53.536245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.536290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 
00:39:26.195 [2024-11-06 15:43:53.536497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.536541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.536679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.536722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.536877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.536919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.537058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.537099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.537290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.537334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 
00:39:26.195 [2024-11-06 15:43:53.537520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.537562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.537703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.537746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.537886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.537927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.538220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.538266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.538520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.538562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 
00:39:26.195 [2024-11-06 15:43:53.538701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.538744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.538867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.538909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.539050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.539094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.539245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.539289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.539430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.539472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 
00:39:26.195 [2024-11-06 15:43:53.539625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.539667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.539862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.539905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.540119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.540161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.540308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.540355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.540478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.540519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 
00:39:26.195 [2024-11-06 15:43:53.540827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.540871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.541014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.541055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.541213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.541256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.541412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.541455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.541577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.541619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 
00:39:26.195 [2024-11-06 15:43:53.541746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.541786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.541937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.541981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.542199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.542250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.542374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.542423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.195 [2024-11-06 15:43:53.542622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.542664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 
00:39:26.195 [2024-11-06 15:43:53.542812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.195 [2024-11-06 15:43:53.542855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.195 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.543000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.543042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.543174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.543232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.543407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.543447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.543589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.543627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 
00:39:26.196 [2024-11-06 15:43:53.543765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.543813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.543938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.543977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.544096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.544134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.544269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.544309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.544430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.544469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 
00:39:26.196 [2024-11-06 15:43:53.544592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.544630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.544758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.544797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.544995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.545035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.545186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.545238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.545434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.545472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 
00:39:26.196 [2024-11-06 15:43:53.545619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.545656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.545776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.545816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.545953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.545990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.546109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.546147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.546349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.546390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 
00:39:26.196 [2024-11-06 15:43:53.546521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.546561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.546748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.546788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.546914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.546951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.547090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.547129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.547342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.547388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 
00:39:26.196 [2024-11-06 15:43:53.547624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.547668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.547886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.547928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.548073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.548116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.548256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.548300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.548489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.548532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 
00:39:26.196 [2024-11-06 15:43:53.548668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.548706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.548835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.548873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.549002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.549042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.549172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.549219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 00:39:26.196 [2024-11-06 15:43:53.549334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.196 [2024-11-06 15:43:53.549374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.196 qpair failed and we were unable to recover it. 
00:39:26.200 [2024-11-06 15:43:53.571169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.571211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.571376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.571431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.571562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.571605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.571752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.571795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.571999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.572041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 
00:39:26.200 [2024-11-06 15:43:53.572247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.572292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.572544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.572577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.572715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.572748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.572978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.573022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.573224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.573269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 
00:39:26.200 [2024-11-06 15:43:53.573462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.573504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.573733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.573777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.573987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.574029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.574166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.574244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.574449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.574491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 
00:39:26.200 [2024-11-06 15:43:53.574705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.574747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.575051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.575094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.575286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.575330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.575464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.575507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.575772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.575814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 
00:39:26.200 [2024-11-06 15:43:53.576028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.576072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.576199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.576254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.576454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.576498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.576639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.576683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.576887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.576930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 
00:39:26.200 [2024-11-06 15:43:53.577074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.577116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.577308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.577353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.577503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.577546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.200 qpair failed and we were unable to recover it. 00:39:26.200 [2024-11-06 15:43:53.577677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.200 [2024-11-06 15:43:53.577720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.577997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.578039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 
00:39:26.201 [2024-11-06 15:43:53.578248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.578293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.578540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.578586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.578863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.578904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.579105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.579149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.579358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.579402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 
00:39:26.201 [2024-11-06 15:43:53.579561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.579605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.579750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.579792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.579986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.580028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.580229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.580273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.580430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.580473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 
00:39:26.201 [2024-11-06 15:43:53.580610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.580658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.580916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.580959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.581090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.581133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.581374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.581419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.581623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.581666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 
00:39:26.201 [2024-11-06 15:43:53.581867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.581909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.582170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.582224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.582382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.582426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.582577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.582619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.582747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.582789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 
00:39:26.201 [2024-11-06 15:43:53.582935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.582977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.583134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.583178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.583495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.583557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.583753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.583796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.584000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.584043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 
00:39:26.201 [2024-11-06 15:43:53.584184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.584239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.584378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.584421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.584628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.584671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.584801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.584844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.584992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.585036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 
00:39:26.201 [2024-11-06 15:43:53.585176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.585230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.585454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.585497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.585692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.585734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.585881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.585925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 00:39:26.201 [2024-11-06 15:43:53.586123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.201 [2024-11-06 15:43:53.586165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.201 qpair failed and we were unable to recover it. 
00:39:26.201 [2024-11-06 15:43:53.586444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.586489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.586639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.586681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.586915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.586959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.587151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.587193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.587357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.587401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 
00:39:26.202 [2024-11-06 15:43:53.587609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.587651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.587785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.587828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.588029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.588073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.588286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.588331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.588467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.588509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 
00:39:26.202 [2024-11-06 15:43:53.588733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.588777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.588910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.588952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.589155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.589197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.589357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.589398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.589554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.589598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 
00:39:26.202 [2024-11-06 15:43:53.589875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.589923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.590067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.590109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.590237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.590282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.590409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.590452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.590578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.590620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 
00:39:26.202 [2024-11-06 15:43:53.590854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.590898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.591024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.591064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.591197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.591260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.591478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.591519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.591718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.591762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 
00:39:26.202 [2024-11-06 15:43:53.591972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.592013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.592141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.592184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.592382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.592425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.592572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.592616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 00:39:26.202 [2024-11-06 15:43:53.592749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.202 [2024-11-06 15:43:53.592792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.202 qpair failed and we were unable to recover it. 
00:39:26.202 [2024-11-06 15:43:53.593061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.593104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.593321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.593365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.593487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.593528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.593655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.593698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.593842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.593885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 
00:39:26.203 [2024-11-06 15:43:53.594025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.594068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.594302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.594345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.594467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.594510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.594800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.594840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.594967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.595004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 
00:39:26.203 [2024-11-06 15:43:53.595142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.595181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.595380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.595419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.595623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.595663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.595777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.595816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.595951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.595991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 
00:39:26.203 [2024-11-06 15:43:53.596172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.596219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.596348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.596388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.596573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.596612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.596761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.596799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.597046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.597085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 
00:39:26.203 [2024-11-06 15:43:53.597277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.597319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.597459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.597525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.597659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.597702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.597839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.597882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.598040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.598084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 
00:39:26.203 [2024-11-06 15:43:53.598199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.598280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.598485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.598529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.598684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.598726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.598920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.598963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.599089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.599132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 
00:39:26.203 [2024-11-06 15:43:53.599332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.599375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.599504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.599546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.599685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.599724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.599852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.599890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 00:39:26.203 [2024-11-06 15:43:53.600095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.600135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.203 qpair failed and we were unable to recover it. 
00:39:26.203 [2024-11-06 15:43:53.600340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.203 [2024-11-06 15:43:53.600380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.600517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.600557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.600751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.600789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.600906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.600945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.601154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.601194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 
00:39:26.204 [2024-11-06 15:43:53.601344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.601384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.601512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.601551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.601673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.601711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.601823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.601862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.601982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.602020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 
00:39:26.204 [2024-11-06 15:43:53.602142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.602181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.602382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.602421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.602609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.602648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.602800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.602839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.603034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.603073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 
00:39:26.204 [2024-11-06 15:43:53.603256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.603296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.603484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.603522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.603669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.603710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.603905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.603943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.604079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.604118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 
00:39:26.204 [2024-11-06 15:43:53.604248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.604288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.604551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.604590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.604705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.604744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.604860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.604899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.605030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.605068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 
00:39:26.204 [2024-11-06 15:43:53.605222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.605261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.605507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.605545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.605726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.605762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.605958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.605994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.606134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.606172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 
00:39:26.204 [2024-11-06 15:43:53.606305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.606346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.606534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.606570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.606679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.606715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.606835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.606869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 00:39:26.204 [2024-11-06 15:43:53.607064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.204 [2024-11-06 15:43:53.607098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.204 qpair failed and we were unable to recover it. 
00:39:26.205 [2024-11-06 15:43:53.607228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.607263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.607445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.607480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.607611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.607647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.607835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.607871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.608114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.608151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 
00:39:26.205 [2024-11-06 15:43:53.608279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.608315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.608512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.608549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.608734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.608772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.608958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.608993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.609192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.609240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 
00:39:26.205 [2024-11-06 15:43:53.609366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.609403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.609652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.609689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.609813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.609850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.610035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.610071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.610232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.610273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 
00:39:26.205 [2024-11-06 15:43:53.610492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.610540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.610792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.610830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.610950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.610986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.611193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.611241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.611369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.611406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 
00:39:26.205 [2024-11-06 15:43:53.611514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.611551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.611739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.611776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.611973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.612012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.612130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.612168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.612370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.612408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 
00:39:26.205 [2024-11-06 15:43:53.612516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.612551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.612748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.612786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.612901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.612939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.613132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.613167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.613303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.613338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 
00:39:26.205 [2024-11-06 15:43:53.613542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.613580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.613701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.613737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.613915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.613952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.614135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.614170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.614366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.614404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 
00:39:26.205 [2024-11-06 15:43:53.614604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.614645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.614820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.205 [2024-11-06 15:43:53.614857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.205 qpair failed and we were unable to recover it. 00:39:26.205 [2024-11-06 15:43:53.615056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.615091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.615291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.615329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.615554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.615591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 
00:39:26.206 [2024-11-06 15:43:53.615714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.615750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.615954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.615990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.616136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.616173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.616447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.616536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.616776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.616829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 
00:39:26.206 [2024-11-06 15:43:53.616976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.617022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.617234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.617280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.617429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.617474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.617682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.617727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.617862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.617907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 
00:39:26.206 [2024-11-06 15:43:53.618128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.618173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.618394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.618432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.618597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.618633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.618826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.618863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.618998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.619035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 
00:39:26.206 [2024-11-06 15:43:53.619147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.619183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.619374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.619411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.619556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.619591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.619721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.619758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.619896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.619931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 
00:39:26.206 [2024-11-06 15:43:53.620041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.620078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.620200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.620247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.620504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.620542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.620759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.620796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.620930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.620966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 
00:39:26.206 [2024-11-06 15:43:53.621155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.621192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.621421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.621460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.621660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.621696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.621939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.621975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.622152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.622189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 
00:39:26.206 [2024-11-06 15:43:53.622503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.622539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.622720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.622756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.622950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.622987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.623181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.623240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 00:39:26.206 [2024-11-06 15:43:53.623447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.206 [2024-11-06 15:43:53.623487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.206 qpair failed and we were unable to recover it. 
00:39:26.206 [2024-11-06 15:43:53.623696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.623738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.623868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.623905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.624084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.624120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.624308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.624347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.624485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.624521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 
00:39:26.207 [2024-11-06 15:43:53.624659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.624702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.624893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.624935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.625138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.625183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.625393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.625444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.625590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.625635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 
00:39:26.207 [2024-11-06 15:43:53.625833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.625877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.626086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.626130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.626399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.626445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.626658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.626702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.626917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.626963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 
00:39:26.207 [2024-11-06 15:43:53.627136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.627183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.627380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.627426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.627628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.627673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.627874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.627919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.628108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.628152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 
00:39:26.207 [2024-11-06 15:43:53.628375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.628422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.628628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.628670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.628901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.628944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.629115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.629159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.629328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.629390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 
00:39:26.207 [2024-11-06 15:43:53.629609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.629652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.629841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.629884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.630107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.630151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.207 [2024-11-06 15:43:53.630417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.207 [2024-11-06 15:43:53.630462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.207 qpair failed and we were unable to recover it. 00:39:26.208 [2024-11-06 15:43:53.630661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.208 [2024-11-06 15:43:53.630705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.208 qpair failed and we were unable to recover it. 
00:39:26.208 [2024-11-06 15:43:53.630910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.208 [2024-11-06 15:43:53.630954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.208 qpair failed and we were unable to recover it. 00:39:26.208 [2024-11-06 15:43:53.631181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.208 [2024-11-06 15:43:53.631238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.208 qpair failed and we were unable to recover it. 00:39:26.208 [2024-11-06 15:43:53.631517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.208 [2024-11-06 15:43:53.631561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.208 qpair failed and we were unable to recover it. 00:39:26.208 [2024-11-06 15:43:53.631706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.208 [2024-11-06 15:43:53.631749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.208 qpair failed and we were unable to recover it. 00:39:26.208 [2024-11-06 15:43:53.632029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.208 [2024-11-06 15:43:53.632071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.208 qpair failed and we were unable to recover it. 
00:39:26.208 [2024-11-06 15:43:53.632260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.632307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.632464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.632508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.632705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.632749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.632876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.632920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.633121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.633166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.633330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.633393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.633605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.633651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.633868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.633913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.634039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.634082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.634294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.634341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.634485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.634530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.634743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.634787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.634933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.634978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.635120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.635164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.635327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.635374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.635645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.635689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.635836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.635881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.636079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.636123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.636279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.636327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.636621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.636669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.636814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.636856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.637059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.637112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.637352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.637400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.637611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.637667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.637819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.637863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.638090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.638133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.638422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.638468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.638660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.638704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.638910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.638954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.639151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.639194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.208 qpair failed and we were unable to recover it.
00:39:26.208 [2024-11-06 15:43:53.639424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.208 [2024-11-06 15:43:53.639469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.639730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.639773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.639985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.640031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.640192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.640253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.640467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.640512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.640652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.640696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.640846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.640889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.641098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.641142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.641318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.641365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.641509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.641553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.641780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.641823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.642017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.642060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.642219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.642265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.642420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.642464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.642650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.642694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.642827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.642880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.643103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.643150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.643366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.643411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.643635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.643678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.643821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.643864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.644063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.644108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.644310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.644355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.644510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.644556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.644691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.644735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.644873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.644917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.645050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.645093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.645248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.645293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.645489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.645533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.645679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.645723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.645937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.645981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.646169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.646228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.646450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.646498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.646632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.646677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.646823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.646867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.647025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.647071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.647284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.647329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.647539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.647584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.647707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.647752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.648047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.648091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.648280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.648326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.648536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.648581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.209 qpair failed and we were unable to recover it.
00:39:26.209 [2024-11-06 15:43:53.648711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.209 [2024-11-06 15:43:53.648753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.648890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.648936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.649090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.649133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.649344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.649396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.649528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.649571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.649718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.649762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.649895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.649939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.650164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.650215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.650410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.650455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.650661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.650703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.650842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.650886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.651097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.651141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.651361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.651406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.651529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.651572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.651707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.651756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.651918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.651963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.652086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.652144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.652328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.652376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.652532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.652577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.652726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.652770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.652894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.652940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.653137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.653180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.653380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.653425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.653678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.210 [2024-11-06 15:43:53.653726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.210 qpair failed and we were unable to recover it.
00:39:26.210 [2024-11-06 15:43:53.653929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.653975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.654171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.654230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.654362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.654408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.654614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.654660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.654888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.654935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 
00:39:26.210 [2024-11-06 15:43:53.655085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.655124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.656570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.656634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.656883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.656928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.657189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.657255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.657463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.657507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 
00:39:26.210 [2024-11-06 15:43:53.657651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.657691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.657842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.657887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.658163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.658217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.658359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.658405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.658622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.658669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 
00:39:26.210 [2024-11-06 15:43:53.658830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.658874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.659000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.659040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.659303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.659344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.659486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.659526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.659736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.659778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 
00:39:26.210 [2024-11-06 15:43:53.660037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.660078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.660265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.660305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.660528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.660568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.660716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.660757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.660901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.660942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 
00:39:26.210 [2024-11-06 15:43:53.661138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.661178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.661335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.661375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.661510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.661551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.661824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.661865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.661985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.662025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 
00:39:26.210 [2024-11-06 15:43:53.662177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.662235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.662373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.662413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.662610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.662652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.662863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.662904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.663043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.663082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 
00:39:26.210 [2024-11-06 15:43:53.663292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.663334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.663496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.663537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.663670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.663710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.663894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.663934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.664193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.664242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 
00:39:26.210 [2024-11-06 15:43:53.664449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.664490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.664700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.664739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.664864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.664906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.665064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.665103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.665245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.665283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 
00:39:26.210 [2024-11-06 15:43:53.665412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.665449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.665696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.210 [2024-11-06 15:43:53.665732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.210 qpair failed and we were unable to recover it. 00:39:26.210 [2024-11-06 15:43:53.665916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.665952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.666163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.666199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.666329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.666366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 
00:39:26.211 [2024-11-06 15:43:53.666499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.666536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.666738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.666775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.666886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.666925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.667055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.667092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.667226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.667264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 
00:39:26.211 [2024-11-06 15:43:53.667417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.667459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.667669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.667731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.668031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.668078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.668219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.668265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.668416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.668460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 
00:39:26.211 [2024-11-06 15:43:53.668738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.668782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.670172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.670284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.670515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.670555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.670699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.670738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.670928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.670965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 
00:39:26.211 [2024-11-06 15:43:53.671226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.671265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.671453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.671490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.671613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.671650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.671891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.671929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.672065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.672104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 
00:39:26.211 [2024-11-06 15:43:53.672255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.672302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.672507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.672544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.672724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.672763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.672902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.672936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.673143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.673178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 
00:39:26.211 [2024-11-06 15:43:53.673319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.673355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.673479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.673516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.673652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.673690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.673812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.673852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.674046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.674082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 
00:39:26.211 [2024-11-06 15:43:53.674278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.674318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.674509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.674546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.674772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.674808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.674951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.674990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 00:39:26.211 [2024-11-06 15:43:53.675193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.211 [2024-11-06 15:43:53.675238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.211 qpair failed and we were unable to recover it. 
00:39:26.211 [2024-11-06 15:43:53.675416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.211 [2024-11-06 15:43:53.675458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.211 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and qpair recovery error repeat for tqpair=0x615000350000, addr=10.0.0.2, port=4420 ...]
00:39:26.213 [2024-11-06 15:43:53.699057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.213 [2024-11-06 15:43:53.699095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.213 qpair failed and we were unable to recover it.
00:39:26.213 [2024-11-06 15:43:53.699345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.699385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.699603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.699641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.699829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.699869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.700154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.700191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.700351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.700390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 
00:39:26.213 [2024-11-06 15:43:53.700526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.700563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.700776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.700813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.700951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.700988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.701109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.701146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.701344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.701386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 
00:39:26.213 [2024-11-06 15:43:53.701518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.701556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.701700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.701737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.701932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.701968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.702160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.702196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.702400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.702439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 
00:39:26.213 [2024-11-06 15:43:53.702644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.702681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.702869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.702907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.703096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.703144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.703353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.703390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.703596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.703634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 
00:39:26.213 [2024-11-06 15:43:53.703762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.213 [2024-11-06 15:43:53.703801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.213 qpair failed and we were unable to recover it. 00:39:26.213 [2024-11-06 15:43:53.704096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.704133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.704340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.704381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.704510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.704549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.704748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.704785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 
00:39:26.214 [2024-11-06 15:43:53.704972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.705008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.705213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.705252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.705385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.705421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.705550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.705588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.705889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.705929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 
00:39:26.214 [2024-11-06 15:43:53.706063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.706102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.706321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.706363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.706559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.706598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.706731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.706771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.706914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.706960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 
00:39:26.214 [2024-11-06 15:43:53.707151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.707190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.707386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.707427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.707631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.707671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.707875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.707913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.708142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.708182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 
00:39:26.214 [2024-11-06 15:43:53.708402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.708444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.708722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.708762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.708982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.709023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.709320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.709366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.709558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.709598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 
00:39:26.214 [2024-11-06 15:43:53.709731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.709770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.709957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.709996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.710198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.710250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.710388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.710449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.710713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.710753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 
00:39:26.214 [2024-11-06 15:43:53.710955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.710995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.711185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.711234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.711419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.711459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.711591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.711632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.711885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.711926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 
00:39:26.214 [2024-11-06 15:43:53.712064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.712104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.712299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.712341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.712465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.712508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.712695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.712735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.712873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.712913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 
00:39:26.214 [2024-11-06 15:43:53.713111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.713151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.713287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.713327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.713528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.713567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.713690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.713731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.713858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.713896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 
00:39:26.214 [2024-11-06 15:43:53.714034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.714074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.714271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.714311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.714501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.714542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.714747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.714788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.715055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.715094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 
00:39:26.214 [2024-11-06 15:43:53.715310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.715351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.715565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.715604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.715866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.715909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.716149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.716190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 00:39:26.214 [2024-11-06 15:43:53.716330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.214 [2024-11-06 15:43:53.716379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.214 qpair failed and we were unable to recover it. 
00:39:26.214 [2024-11-06 15:43:53.716536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.214 [2024-11-06 15:43:53.716577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.214 qpair failed and we were unable to recover it.
00:39:26.216 [the identical three-line sequence — posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats for every retry from [2024-11-06 15:43:53.716808] through [2024-11-06 15:43:53.745405]]
00:39:26.216 [2024-11-06 15:43:53.745605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.745649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.745863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.745915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.746126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.746180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.746335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.746379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.746515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.746558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 
00:39:26.216 [2024-11-06 15:43:53.746711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.746765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.747056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.747101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.747306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.747353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.747564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.747611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.747832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.747876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 
00:39:26.216 [2024-11-06 15:43:53.748009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.748062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.748189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.748252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.748404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.748445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.748671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.748717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.748988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.749036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 
00:39:26.216 [2024-11-06 15:43:53.749184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.749240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.749380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.749435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.749605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.749650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.749775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.749818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.750099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.750142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 
00:39:26.216 [2024-11-06 15:43:53.750296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.750342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.750554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.750598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.750722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.750765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.751026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.751069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.751340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.751384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 
00:39:26.216 [2024-11-06 15:43:53.751655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.751706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.751852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.751896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.752038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.752089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.752239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.752285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.216 [2024-11-06 15:43:53.752557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.752600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 
00:39:26.216 [2024-11-06 15:43:53.752814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.216 [2024-11-06 15:43:53.752857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.216 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.753052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.753103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.753249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.753294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.753445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.753488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.753685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.753728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 
00:39:26.217 [2024-11-06 15:43:53.753987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.754029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.754171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.754224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.754437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.754491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.754654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.754698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.754839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.754881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 
00:39:26.217 [2024-11-06 15:43:53.755161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.755215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.755380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.755424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.755616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.755661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.755944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.755986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.756182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.756234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 
00:39:26.217 [2024-11-06 15:43:53.756386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.756430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.756726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.756770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.756968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.757012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.757225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.757269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.757529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.757580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 
00:39:26.217 [2024-11-06 15:43:53.757792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.757849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.758058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.758102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.758360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.758405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.758616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.758662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.758941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.758986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 
00:39:26.217 [2024-11-06 15:43:53.759189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.759246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.759388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.759430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.759584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.759629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.759909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.759952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.760234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.760278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 
00:39:26.217 [2024-11-06 15:43:53.760556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.760600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.760749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.760792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.760935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.760980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.761169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.761218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.761477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.761521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 
00:39:26.217 [2024-11-06 15:43:53.761721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.761770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.761942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.761988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.762193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.762255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.762381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.762423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.762702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.762747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 
00:39:26.217 [2024-11-06 15:43:53.763055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.763106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.763317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.763374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.763635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.763679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.763903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.763950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 00:39:26.217 [2024-11-06 15:43:53.764148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.217 [2024-11-06 15:43:53.764193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.217 qpair failed and we were unable to recover it. 
00:39:26.217 [2024-11-06 15:43:53.764423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.764467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.764671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.764716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.764922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.764971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.765187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.765242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.765521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.765566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.765804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.765848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.766110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.766156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.766380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.766425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.766572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.766616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.766820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.766866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.767085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.767129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.767406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.767452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.767656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.767702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.767842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.767885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.768011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.768059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.768189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.768248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.768380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.768425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.768637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.768680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.768943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.768990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.769229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.769276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.769471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.769515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.769794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.769847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.770050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.217 [2024-11-06 15:43:53.770096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.217 qpair failed and we were unable to recover it.
00:39:26.217 [2024-11-06 15:43:53.770291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.770339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.770486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.770530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.770747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.770792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.771041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.771092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.771271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.771317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.771518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.771562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.771779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.771832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.772057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.772103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.772231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.772276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.772611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.772664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.772861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.772907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.773135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.773183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.773472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.773516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.773761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.773805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.774091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.774135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.774295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.774341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.774500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.774564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.774827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.774871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.775095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.775143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.775375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.775427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.775689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.775736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.775881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.775926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.776118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.776165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.776451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.776498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.776782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.776832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.777041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.777088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.777285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.777330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.777485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.777530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.777658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.777700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.777924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.777968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.778158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.778210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.778425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.778472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.778679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.778724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.778867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.778912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.779122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.779165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.779439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.779487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.779672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.779719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.780051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.780095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.780294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.780340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.780488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.780532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.780767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.780810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.781015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.781059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.781269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.781314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.781572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.781616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.781753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.781798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.781929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.781972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.782272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.782319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.782617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.782662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.782801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.782845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.783100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.783149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.783402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.783446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.783658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.783703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.783906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.783950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.784137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.784181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.784344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.784387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.784529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.784572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.784765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.784809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.784977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.785023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.785230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.785275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.785477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.785522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.785727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.785771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.785913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.785956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.786094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.786137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.786372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.786419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.786552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.786596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.786787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.786829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.786953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.786996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.218 [2024-11-06 15:43:53.787291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.218 [2024-11-06 15:43:53.787338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.218 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.787629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.787671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.787924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.787969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.788238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.788281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.788430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.788474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.788681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.788725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.789034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.789079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.789346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.789391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.789624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.789668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.789879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.789925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.790131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.790182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.790480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.790525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.790692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.790738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.790937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.790999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.791254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.791299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.791434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.791484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.791647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.791692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.791991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.792037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.792191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.792255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.792529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.792576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.792880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.792924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.793129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.793174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.793407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.793461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.793725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.793770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.793978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.219 [2024-11-06 15:43:53.794022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.219 qpair failed and we were unable to recover it.
00:39:26.219 [2024-11-06 15:43:53.794173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.794229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.794458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.794508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.794775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.794819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.794981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.795028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.795228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.795274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 
00:39:26.219 [2024-11-06 15:43:53.795576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.795621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.795825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.795871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.796009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.796059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.796218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.796263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.796430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.796472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 
00:39:26.219 [2024-11-06 15:43:53.796680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.796727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.797018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.797066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.797268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.797315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.797453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.797498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.797720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.797764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 
00:39:26.219 [2024-11-06 15:43:53.797977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.798020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.798230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.798278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.798492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.798540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.798751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.798795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.799013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.799057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 
00:39:26.219 [2024-11-06 15:43:53.799313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.799359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.799549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.799601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.799833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.799880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.800039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.800084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.800233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.800278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 
00:39:26.219 [2024-11-06 15:43:53.800605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.800649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.800848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.800891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.801124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.801168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.219 qpair failed and we were unable to recover it. 00:39:26.219 [2024-11-06 15:43:53.801309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.219 [2024-11-06 15:43:53.801353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.495 qpair failed and we were unable to recover it. 00:39:26.495 [2024-11-06 15:43:53.801499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.801545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 
00:39:26.496 [2024-11-06 15:43:53.801688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.801733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.801863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.801907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.802041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.802085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.802295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.802340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.802495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.802540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 
00:39:26.496 [2024-11-06 15:43:53.802833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.802878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.803088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.803131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.803424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.803475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.803744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.803787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.803996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.804040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 
00:39:26.496 [2024-11-06 15:43:53.804327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.804370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.804504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.804545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.804675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.804715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.804926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.804968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.805183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.805237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 
00:39:26.496 [2024-11-06 15:43:53.805384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.805425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.805682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.805725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.805995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.806037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.806251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.806294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.806457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.806500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 
00:39:26.496 [2024-11-06 15:43:53.806636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.806685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.806977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.807020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.807241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.807297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.807556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.807599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 00:39:26.496 [2024-11-06 15:43:53.807799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.496 [2024-11-06 15:43:53.807840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.496 qpair failed and we were unable to recover it. 
00:39:26.496 [2024-11-06 15:43:53.808109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.808151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.808385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.808429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.808625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.808666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.808920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.808960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.809167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.809219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 
00:39:26.497 [2024-11-06 15:43:53.809364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.809406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.809599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.809640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.809771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.809812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.810078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.810122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.810334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.810379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 
00:39:26.497 [2024-11-06 15:43:53.810569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.810613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.810808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.810852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.810981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.811024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.811163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.811220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.811414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.811457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 
00:39:26.497 [2024-11-06 15:43:53.811715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.811758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.812029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.812073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.812364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.812411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.812634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.812677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.812884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.812928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 
00:39:26.497 [2024-11-06 15:43:53.813169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.813220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.813425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.813468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.813724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.813773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.813976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.814021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.814300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.814345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 
00:39:26.497 [2024-11-06 15:43:53.814549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.814593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.497 qpair failed and we were unable to recover it. 00:39:26.497 [2024-11-06 15:43:53.814737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.497 [2024-11-06 15:43:53.814779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.498 qpair failed and we were unable to recover it. 00:39:26.498 [2024-11-06 15:43:53.814986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.498 [2024-11-06 15:43:53.815029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.498 qpair failed and we were unable to recover it. 00:39:26.498 [2024-11-06 15:43:53.815324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.498 [2024-11-06 15:43:53.815369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.498 qpair failed and we were unable to recover it. 00:39:26.498 [2024-11-06 15:43:53.815636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.498 [2024-11-06 15:43:53.815681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.498 qpair failed and we were unable to recover it. 
00:39:26.498 [2024-11-06 15:43:53.815898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.815941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.816222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.816266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.816564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.816621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.816885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.816929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.817071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.817118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.817253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.817298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.817608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.817654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.817886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.817934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.818068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.818124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.818327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.818372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.818588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.818631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.818881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.818925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.819221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.819270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.819427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.819475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.819678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.819722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.819950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.819994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.820227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.820273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.820421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.820467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.820810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.820857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.821152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.821259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.821514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.821562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.821849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.821894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.498 [2024-11-06 15:43:53.822183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.498 [2024-11-06 15:43:53.822245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.498 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.822503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.822547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.822685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.822729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.822924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.822967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.823160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.823217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.823367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.823410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.823554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.823598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.823802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.823844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.823979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.824023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.824244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.824291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.824490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.824536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.824740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.824785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.824931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.824976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.825199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.825252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.825511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.825554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.825697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.825741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.825939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.825983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.826249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.826296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.826495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.826538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.826743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.826786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.827020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.827063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.827273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.827318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.827512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.827555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.827780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.827823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.827966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.828010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.828268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.828313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.828542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.828594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.828793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.828839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.828984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.499 [2024-11-06 15:43:53.829028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.499 qpair failed and we were unable to recover it.
00:39:26.499 [2024-11-06 15:43:53.829233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.829292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.829496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.829540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.829689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.829731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.829858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.829902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.830036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.830080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.830366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.830411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.830629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.830671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.830884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.830928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.831132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.831182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.831421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.831465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.831626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.831671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.831889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.831934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.832157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.832216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.832449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.832493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.832682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.832725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.832971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.833015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.833283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.833328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.833519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.833563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.833708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.833753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.833958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.834000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.834283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.834327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.834633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.834676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.834941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.834985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.835223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.835268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.835415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.835459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.835761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.835803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.836062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.836105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.836318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.836364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.836499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.836542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.836775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.836819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.836956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.837000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.837227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.837271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.837492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.837536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.837735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.837779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.838008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.838050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.838282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.838328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.838528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.838571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.500 [2024-11-06 15:43:53.838832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.500 [2024-11-06 15:43:53.838876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.500 qpair failed and we were unable to recover it.
00:39:26.501 [2024-11-06 15:43:53.839066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.501 [2024-11-06 15:43:53.839109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.501 qpair failed and we were unable to recover it.
00:39:26.501 [2024-11-06 15:43:53.839273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.501 [2024-11-06 15:43:53.839318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.501 qpair failed and we were unable to recover it.
00:39:26.501 [2024-11-06 15:43:53.839463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.501 [2024-11-06 15:43:53.839507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.501 qpair failed and we were unable to recover it.
00:39:26.501 [2024-11-06 15:43:53.839702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.501 [2024-11-06 15:43:53.839745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.501 qpair failed and we were unable to recover it.
00:39:26.501 [2024-11-06 15:43:53.839881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.501 [2024-11-06 15:43:53.839923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.501 qpair failed and we were unable to recover it.
00:39:26.501 [2024-11-06 15:43:53.840049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.501 [2024-11-06 15:43:53.840093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.501 qpair failed and we were unable to recover it.
00:39:26.501 [2024-11-06 15:43:53.840282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.840326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.840486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.840529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.840729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.840773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.840923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.840967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.841104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.841154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 
00:39:26.501 [2024-11-06 15:43:53.841371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.841417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.841566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.841608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.841869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.841913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.842171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.842227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.842440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.842485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 
00:39:26.501 [2024-11-06 15:43:53.842740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.842783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.842979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.843022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.843233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.843278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.843416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.843460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.843649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.843693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 
00:39:26.501 [2024-11-06 15:43:53.843913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.843956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.844105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.844148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.844391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.844438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.844667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.844710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.844897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.844941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 
00:39:26.501 [2024-11-06 15:43:53.845090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.845134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.845364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.845409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.845598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.845642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.845844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.845888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.846077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.846121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 
00:39:26.501 [2024-11-06 15:43:53.846267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.501 [2024-11-06 15:43:53.846311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.501 qpair failed and we were unable to recover it. 00:39:26.501 [2024-11-06 15:43:53.846525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.846569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.846767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.846811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.846970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.847013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.847227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.847272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 
00:39:26.502 [2024-11-06 15:43:53.847552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.847597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.847871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.847914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.848061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.848107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.848247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.848290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.848412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.848456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 
00:39:26.502 [2024-11-06 15:43:53.848685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.848730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.848864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.848907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.849099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.849141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.849369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.849414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.849542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.849586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 
00:39:26.502 [2024-11-06 15:43:53.849814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.849856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.850045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.850089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.850369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.850415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.850560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.850602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.850758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.850808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 
00:39:26.502 [2024-11-06 15:43:53.851040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.851083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.851310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.851354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.851491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.851535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.851673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.851718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.851907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.851950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 
00:39:26.502 [2024-11-06 15:43:53.852246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.852290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.852552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.852595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.852728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.852771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.853007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.853051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.853239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.853283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 
00:39:26.502 [2024-11-06 15:43:53.853472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.853516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.853715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.853759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.853966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.854009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.854223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.854267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.854498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.854541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 
00:39:26.502 [2024-11-06 15:43:53.854798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.854845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.855079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.855123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.855311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.855355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.855495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.855538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.502 qpair failed and we were unable to recover it. 00:39:26.502 [2024-11-06 15:43:53.855746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.502 [2024-11-06 15:43:53.855789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 
00:39:26.503 [2024-11-06 15:43:53.855924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.855967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.856099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.856143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.856385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.856430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.856571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.856614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.856811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.856855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 
00:39:26.503 [2024-11-06 15:43:53.856990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.857032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.857248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.857294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.857553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.857597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.857802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.857846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.858133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.858177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 
00:39:26.503 [2024-11-06 15:43:53.858476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.858519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.858731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.858775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.859029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.859071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.859270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.859315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.859626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.859669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 
00:39:26.503 [2024-11-06 15:43:53.859925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.859974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.860224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.860268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.860414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.860457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.860733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.860776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 00:39:26.503 [2024-11-06 15:43:53.861036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.503 [2024-11-06 15:43:53.861084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.503 qpair failed and we were unable to recover it. 
00:39:26.503 [2024-11-06 15:43:53.861356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.503 [2024-11-06 15:43:53.861401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.503 qpair failed and we were unable to recover it.
00:39:26.503-00:39:26.506 [the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x61500032eb80 (addr=10.0.0.2, port=4420) repeats for every reconnection attempt from 15:43:53.861525 through 15:43:53.891433; each attempt failed with errno = 111 and ended with "qpair failed and we were unable to recover it."]
00:39:26.506 [2024-11-06 15:43:53.891648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.506 [2024-11-06 15:43:53.891690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.506 qpair failed and we were unable to recover it. 00:39:26.506 [2024-11-06 15:43:53.891877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.506 [2024-11-06 15:43:53.891919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.506 qpair failed and we were unable to recover it. 00:39:26.506 [2024-11-06 15:43:53.892115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.506 [2024-11-06 15:43:53.892157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.506 qpair failed and we were unable to recover it. 00:39:26.506 [2024-11-06 15:43:53.892314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.506 [2024-11-06 15:43:53.892358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.506 qpair failed and we were unable to recover it. 00:39:26.506 [2024-11-06 15:43:53.892570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.506 [2024-11-06 15:43:53.892619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.506 qpair failed and we were unable to recover it. 
00:39:26.506 [2024-11-06 15:43:53.892822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.506 [2024-11-06 15:43:53.892865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.506 qpair failed and we were unable to recover it. 00:39:26.506 [2024-11-06 15:43:53.893003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.506 [2024-11-06 15:43:53.893045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.506 qpair failed and we were unable to recover it. 00:39:26.506 [2024-11-06 15:43:53.893255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.506 [2024-11-06 15:43:53.893299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.506 qpair failed and we were unable to recover it. 00:39:26.506 [2024-11-06 15:43:53.893527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.506 [2024-11-06 15:43:53.893576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.506 qpair failed and we were unable to recover it. 00:39:26.506 [2024-11-06 15:43:53.893728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.506 [2024-11-06 15:43:53.893771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.506 qpair failed and we were unable to recover it. 
00:39:26.506 [2024-11-06 15:43:53.893906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.506 [2024-11-06 15:43:53.893947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.506 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.894227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.894271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.894413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.894455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.894667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.894709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.894835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.894878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 
00:39:26.507 [2024-11-06 15:43:53.895178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.895229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.895453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.895495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.895780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.895823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.895986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.896029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.896294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.896338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 
00:39:26.507 [2024-11-06 15:43:53.896545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.896588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.896716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.896759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.896959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.897002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.897195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.897255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.897532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.897574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 
00:39:26.507 [2024-11-06 15:43:53.897832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.897874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.898077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.898119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.898415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.898460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.898721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.898764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.898893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.898935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 
00:39:26.507 [2024-11-06 15:43:53.899147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.899191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.899408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.899452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.899652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.899694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.899891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.899933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.900152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.900195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 
00:39:26.507 [2024-11-06 15:43:53.900408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.900451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.900581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.900623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.900827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.900870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.901063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.901105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.901305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.901350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 
00:39:26.507 [2024-11-06 15:43:53.901612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.901654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.901880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.901921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.902129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.902172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.902340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.902382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.902575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.902629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 
00:39:26.507 [2024-11-06 15:43:53.902859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.902901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.903090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.903133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.903338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.903381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.507 [2024-11-06 15:43:53.903590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.507 [2024-11-06 15:43:53.903633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.507 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.903844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.903886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 
00:39:26.508 [2024-11-06 15:43:53.904075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.904119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.904318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.904363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.904584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.904628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.904833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.904875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.905072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.905116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 
00:39:26.508 [2024-11-06 15:43:53.905310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.905354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.905664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.905706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.905988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.906031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.906162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.906213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.906339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.906382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 
00:39:26.508 [2024-11-06 15:43:53.906588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.906631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.906908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.906950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.907158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.907207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.907347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.907390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.907602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.907645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 
00:39:26.508 [2024-11-06 15:43:53.907852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.907894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.908044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.908086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.908282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.908325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.908472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.908513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.908799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.908841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 
00:39:26.508 [2024-11-06 15:43:53.909107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.909149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.909430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.909474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.909683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.909726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.910040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.910084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 00:39:26.508 [2024-11-06 15:43:53.910365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.508 [2024-11-06 15:43:53.910409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.508 qpair failed and we were unable to recover it. 
00:39:26.508 [2024-11-06 15:43:53.910713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.508 [2024-11-06 15:43:53.910756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.508 qpair failed and we were unable to recover it.
00:39:26.508 [2024-11-06 15:43:53.911015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.508 [2024-11-06 15:43:53.911058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.508 qpair failed and we were unable to recover it.
00:39:26.508 [2024-11-06 15:43:53.911280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.508 [2024-11-06 15:43:53.911323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.508 qpair failed and we were unable to recover it.
00:39:26.508 [2024-11-06 15:43:53.911621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.508 [2024-11-06 15:43:53.911665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.508 qpair failed and we were unable to recover it.
00:39:26.508 [2024-11-06 15:43:53.911895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.508 [2024-11-06 15:43:53.911938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.508 qpair failed and we were unable to recover it.
00:39:26.508 [2024-11-06 15:43:53.912149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.508 [2024-11-06 15:43:53.912191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.508 qpair failed and we were unable to recover it.
00:39:26.508 [2024-11-06 15:43:53.912498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.508 [2024-11-06 15:43:53.912541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.508 qpair failed and we were unable to recover it.
00:39:26.508 [2024-11-06 15:43:53.912730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.508 [2024-11-06 15:43:53.912773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.508 qpair failed and we were unable to recover it.
00:39:26.508 [2024-11-06 15:43:53.912972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.508 [2024-11-06 15:43:53.913015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.508 qpair failed and we were unable to recover it.
00:39:26.508 [2024-11-06 15:43:53.913292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.508 [2024-11-06 15:43:53.913342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.508 qpair failed and we were unable to recover it.
00:39:26.508 [2024-11-06 15:43:53.913602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.508 [2024-11-06 15:43:53.913645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.508 qpair failed and we were unable to recover it.
00:39:26.508 [2024-11-06 15:43:53.913853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.508 [2024-11-06 15:43:53.913896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.508 qpair failed and we were unable to recover it.
00:39:26.508 [2024-11-06 15:43:53.914098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.914140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.914428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.914473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.914612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.914653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.914849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.914893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.915109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.915153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.915443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.915487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.915680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.915722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.915936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.915980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.916170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.916223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.916426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.916470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.916659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.916701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.916916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.916959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.917233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.917277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.917487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.917530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.917783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.917824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.918058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.918101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.918297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.918342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.918620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.918662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.918945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.918988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.919196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.919247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.919452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.919494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.919776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.919818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.920038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.920081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.920243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.920287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.920560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.920605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.920731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.920774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.920987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.921029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.921229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.921274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.921485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.921528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.921725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.921769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.921978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.922020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.922231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.922277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.922489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.922531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.922788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.922831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.923040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.923083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.923311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.923355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.923639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.509 [2024-11-06 15:43:53.923682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.509 qpair failed and we were unable to recover it.
00:39:26.509 [2024-11-06 15:43:53.923942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.923991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.924199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.924252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.924464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.924507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.924657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.924700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.924989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.925031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.925254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.925299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.925490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.925533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.925814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.925856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.925992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.926034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.926241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.926285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.926430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.926472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.926664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.926706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.926921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.926962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.927179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.927231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.927448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.927491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.927630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.927671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.927862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.927903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.928090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.928132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.928418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.928461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.928652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.928695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.928901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.928944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.929131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.929173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.929385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.929428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.929637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.929681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.929888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.929931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.930164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.930216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.930411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.930454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.930725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.930769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.930918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.930960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.931148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.931191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.931406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.931449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.931729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.931771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.931966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.932008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.510 [2024-11-06 15:43:53.932231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.510 [2024-11-06 15:43:53.932275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.510 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.932485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.932529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.932682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.932724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.932922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.932965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.933227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.933270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.933486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.933530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.933753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.933795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.933990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.934039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.934192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.934247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.934533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.934575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.934777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.934820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.935030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.935074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.935334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.935378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.935661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.935703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.935846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.935889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.936089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.936131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.936342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.936386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.936654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.936698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.936894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.936937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.937141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.937184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.937415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.937457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.937603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.937646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.937777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.937818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.938088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.938135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.938427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.938470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.938675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.938716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.938973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.939017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.939156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.939198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.939432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.939474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.939615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.939655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.939808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.939850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.940052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.940093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.940266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.511 [2024-11-06 15:43:53.940311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.511 qpair failed and we were unable to recover it.
00:39:26.511 [2024-11-06 15:43:53.940510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.511 [2024-11-06 15:43:53.940552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.511 qpair failed and we were unable to recover it. 00:39:26.511 [2024-11-06 15:43:53.940773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.511 [2024-11-06 15:43:53.940817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.511 qpair failed and we were unable to recover it. 00:39:26.511 [2024-11-06 15:43:53.941005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.511 [2024-11-06 15:43:53.941045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.511 qpair failed and we were unable to recover it. 00:39:26.511 [2024-11-06 15:43:53.941166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.511 [2024-11-06 15:43:53.941214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.511 qpair failed and we were unable to recover it. 00:39:26.511 [2024-11-06 15:43:53.941404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.511 [2024-11-06 15:43:53.941444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.511 qpair failed and we were unable to recover it. 
00:39:26.511 [2024-11-06 15:43:53.941647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.511 [2024-11-06 15:43:53.941688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.511 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.941811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.941852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.942107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.942149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.942316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.942358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.942502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.942545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 
00:39:26.512 [2024-11-06 15:43:53.942684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.942724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.942857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.942898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.943125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.943166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.943399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.943442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.943653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.943700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 
00:39:26.512 [2024-11-06 15:43:53.943923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.943963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.944246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.944290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.944562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.944604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.944795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.944837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.945043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.945087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 
00:39:26.512 [2024-11-06 15:43:53.945318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.945360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.945573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.945614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.945804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.945845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.946125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.946166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.946392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.946435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 
00:39:26.512 [2024-11-06 15:43:53.946660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.946701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.946833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.946873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.947077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.947118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.947312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.947355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.947491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.947532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 
00:39:26.512 [2024-11-06 15:43:53.947756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.947797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.948064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.948104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.948309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.948351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.948576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.948618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.948822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.948864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 
00:39:26.512 [2024-11-06 15:43:53.949144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.949187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.949342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.949383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.949665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.949709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.950019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.950062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.950336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.950378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 
00:39:26.512 [2024-11-06 15:43:53.950504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.950547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.950739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.950782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.951040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.951083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.512 [2024-11-06 15:43:53.951307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.512 [2024-11-06 15:43:53.951349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.512 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.951578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.951619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 
00:39:26.513 [2024-11-06 15:43:53.951896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.951938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.952145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.952187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.952442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.952484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.952789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.952830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.953033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.953075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 
00:39:26.513 [2024-11-06 15:43:53.953228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.953272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.953422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.953464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.953652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.953693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.953906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.953955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.954173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.954224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 
00:39:26.513 [2024-11-06 15:43:53.954460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.954503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.954802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.954844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.955106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.955149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.955356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.955398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.955690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.955732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 
00:39:26.513 [2024-11-06 15:43:53.955951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.955994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.956136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.956178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.956417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.956459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.956717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.956759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.957018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.957060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 
00:39:26.513 [2024-11-06 15:43:53.957328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.957371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.957653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.957694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.957855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.957895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.958090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.958132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.958349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.958394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 
00:39:26.513 [2024-11-06 15:43:53.958655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.958697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.958953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.958996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.959130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.959171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.959320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.959364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.959520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.959560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 
00:39:26.513 [2024-11-06 15:43:53.959852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.959895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.960098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.960140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.960363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.960406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.960609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.960651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.960935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.960977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 
00:39:26.513 [2024-11-06 15:43:53.961125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.961167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.961378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.513 [2024-11-06 15:43:53.961426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.513 qpair failed and we were unable to recover it. 00:39:26.513 [2024-11-06 15:43:53.961615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.961656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.961804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.961845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.962060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.962101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 
00:39:26.514 [2024-11-06 15:43:53.962250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.962292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.962553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.962597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.962740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.962783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.962989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.963032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.963263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.963305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 
00:39:26.514 [2024-11-06 15:43:53.963515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.963558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.963824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.963865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.964065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.964107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.964248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.964291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.964573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.964615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 
00:39:26.514 [2024-11-06 15:43:53.964813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.964857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.965067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.965109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.965375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.965422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.965637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.965680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.965963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.966005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 
00:39:26.514 [2024-11-06 15:43:53.966270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.966313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.966617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.966661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.966939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.966982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.967102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.967144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.967416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.967459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 
00:39:26.514 [2024-11-06 15:43:53.967676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.967718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.967972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.968014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.968251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.968294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.968511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.968552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.968752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.968795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 
00:39:26.514 [2024-11-06 15:43:53.968987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.969028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.969166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.969218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.969420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.969463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.969669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.969710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.970009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.970051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 
00:39:26.514 [2024-11-06 15:43:53.970212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.970254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.970537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.970580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.970770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.970812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.970976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.971018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 00:39:26.514 [2024-11-06 15:43:53.971291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.514 [2024-11-06 15:43:53.971334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.514 qpair failed and we were unable to recover it. 
00:39:26.514 [2024-11-06 15:43:53.971528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.971570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.971806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.971854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.972054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.972097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.972293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.972335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.972548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.972590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 
00:39:26.515 [2024-11-06 15:43:53.972807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.972849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.973057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.973099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.973361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.973405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.973663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.973705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.973837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.973880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 
00:39:26.515 [2024-11-06 15:43:53.974005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.974045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.974244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.974285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.974539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.974578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.974863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.974906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.975038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.975078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 
00:39:26.515 [2024-11-06 15:43:53.975271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.975313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.975597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.975640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.975836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.975878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.976067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.976108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.976316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.976359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 
00:39:26.515 [2024-11-06 15:43:53.976563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.976604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.976814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.976856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.977134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.977177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.977449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.977491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.977715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.977757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 
00:39:26.515 [2024-11-06 15:43:53.977891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.977933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.978137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.978178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.978334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.978375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.978642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.978683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.978891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.978932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 
00:39:26.515 [2024-11-06 15:43:53.979122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.979164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.979318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.979362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.979561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.515 [2024-11-06 15:43:53.979604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.515 qpair failed and we were unable to recover it. 00:39:26.515 [2024-11-06 15:43:53.979903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.979946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.980144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.980186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 
00:39:26.516 [2024-11-06 15:43:53.980406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.980449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.980636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.980679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.980960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.981002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.981200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.981252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.981514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.981556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 
00:39:26.516 [2024-11-06 15:43:53.981683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.981724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.981993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.982043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.982183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.982238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.982523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.982564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.982718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.982760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 
00:39:26.516 [2024-11-06 15:43:53.982966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.983009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.983223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.983266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.983401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.983443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.983631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.983672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.983792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.983833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 
00:39:26.516 [2024-11-06 15:43:53.983973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.984015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.984297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.984340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.984528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.984571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.984831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.984873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.985103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.985143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 
00:39:26.516 [2024-11-06 15:43:53.985368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.985411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.985681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.985724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.985876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.985918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.986210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.986253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.986392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.986433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 
00:39:26.516 [2024-11-06 15:43:53.986699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.986741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.986873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.986915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.987153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.987197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.987462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.987504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.987760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.987802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 
00:39:26.516 [2024-11-06 15:43:53.987990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.988032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.988312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.988355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.988565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.988607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.988818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.988861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.989067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.989109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 
00:39:26.516 [2024-11-06 15:43:53.989329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.516 [2024-11-06 15:43:53.989374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.516 qpair failed and we were unable to recover it. 00:39:26.516 [2024-11-06 15:43:53.989658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.989700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.989844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.989888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.990163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.990212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.990466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.990509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 
00:39:26.517 [2024-11-06 15:43:53.990787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.990829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.990978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.991021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.991301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.991346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.991544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.991586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.991722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.991764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 
00:39:26.517 [2024-11-06 15:43:53.991992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.992035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.992168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.992229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.992432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.992475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.992708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.992751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.992967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.993010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 
00:39:26.517 [2024-11-06 15:43:53.993164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.993214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.993425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.993468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.993673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.993714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.993902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.993944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.994136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.994177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 
00:39:26.517 [2024-11-06 15:43:53.994398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.994441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.994605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.994648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.994939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.994982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.995195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.995249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.995484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.995527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 
00:39:26.517 [2024-11-06 15:43:53.995669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.995711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.995909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.995951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.996188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.996260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.996399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.996441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.996647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.996690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 
00:39:26.517 [2024-11-06 15:43:53.996842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.996884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.997155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.997198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.997422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.997464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.997659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.997701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.997838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.997879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 
00:39:26.517 [2024-11-06 15:43:53.998169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.998221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.998448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.998492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.998702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.998743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.998947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.517 [2024-11-06 15:43:53.998989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.517 qpair failed and we were unable to recover it. 00:39:26.517 [2024-11-06 15:43:53.999179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:53.999232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 
00:39:26.518 [2024-11-06 15:43:53.999464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:53.999506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:53.999722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:53.999764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:53.999968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.000009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.000241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.000286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.000572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.000615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 
00:39:26.518 [2024-11-06 15:43:54.000839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.000882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.001073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.001116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.001329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.001373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.001589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.001632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.002119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.002169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 
00:39:26.518 [2024-11-06 15:43:54.002468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.002514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.002679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.002731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.002990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.003032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.003288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.003333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.003604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.003645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 
00:39:26.518 [2024-11-06 15:43:54.003862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.003905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.004049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.004092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.004302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.004347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.004547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.004590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.004798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.004840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 
00:39:26.518 [2024-11-06 15:43:54.004983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.005026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.005256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.005299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.005580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.005623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.005905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.005947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.006104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.006147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 
00:39:26.518 [2024-11-06 15:43:54.006364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.006412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.006540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.006582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.006725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.006767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.006969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.007011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.007222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.007266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 
00:39:26.518 [2024-11-06 15:43:54.007484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.007527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.007763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.007806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.007994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.008036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.008241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.008286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.008490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.008534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 
00:39:26.518 [2024-11-06 15:43:54.008671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.008714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.008850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.008891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.518 [2024-11-06 15:43:54.009038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.518 [2024-11-06 15:43:54.009080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.518 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.009230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.009274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.009481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.009524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 
00:39:26.519 [2024-11-06 15:43:54.009741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.009783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.010041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.010083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.010321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.010366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.010509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.010551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.010704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.010745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 
00:39:26.519 [2024-11-06 15:43:54.010879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.010921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.011108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.011151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.011372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.011416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.011606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.011648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.011904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.011947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 
00:39:26.519 [2024-11-06 15:43:54.012157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.012199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.012416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.012464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.012620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.012662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.012920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.012963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.013250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.013295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 
00:39:26.519 [2024-11-06 15:43:54.013520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.013563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.013822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.013865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.014076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.014119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.014315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.014360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.014567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.014611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 
00:39:26.519 [2024-11-06 15:43:54.014833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.014876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.015109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.015151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.015301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.015345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.015607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.015651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.015808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.015850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 
00:39:26.519 [2024-11-06 15:43:54.016068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.016112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.016391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.016436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.016696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.016739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.016950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.016993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.017271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.017318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 
00:39:26.519 [2024-11-06 15:43:54.017532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.017575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.017706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.519 [2024-11-06 15:43:54.017749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.519 qpair failed and we were unable to recover it. 00:39:26.519 [2024-11-06 15:43:54.017951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.017993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.018280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.018326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.018622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.018666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 
00:39:26.520 [2024-11-06 15:43:54.018872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.018914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.019139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.019182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.019388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.019431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.019653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.019697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.019896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.019940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 
00:39:26.520 [2024-11-06 15:43:54.020148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.020194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.020405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.020448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.020593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.020637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.020855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.020896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.021149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.021189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 
00:39:26.520 [2024-11-06 15:43:54.021463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.021507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.021732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.021774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.021975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.022016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.022151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.022194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.022496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.022537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 
00:39:26.520 [2024-11-06 15:43:54.022747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.022790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.023053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.023102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.023300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.023344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.023558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.023601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.023796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.023837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 
00:39:26.520 [2024-11-06 15:43:54.024030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.024071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.024285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.024328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.024542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.024584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.024783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.024824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.024961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.025004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 
00:39:26.520 [2024-11-06 15:43:54.025218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.025261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.025417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.025458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.025678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.025718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.026026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.026068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.026218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.026261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 
00:39:26.520 [2024-11-06 15:43:54.026387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.026427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.026741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.026782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.026974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.027015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.027225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.520 [2024-11-06 15:43:54.027269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.520 qpair failed and we were unable to recover it. 00:39:26.520 [2024-11-06 15:43:54.027491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.027533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 
00:39:26.521 [2024-11-06 15:43:54.027792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.027833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.027966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.028008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.028198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.028266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.028390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.028430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.028635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.028675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 
00:39:26.521 [2024-11-06 15:43:54.028887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.028926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.029124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.029164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.029380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.029422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.029685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.029726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.029943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.029984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 
00:39:26.521 [2024-11-06 15:43:54.030117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.030159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.030382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.030423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.030565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.030607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.030737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.030779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.030986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.031028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 
00:39:26.521 [2024-11-06 15:43:54.031310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.031354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.031543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.031584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.031838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.031881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.032020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.032062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.032262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.032305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 
00:39:26.521 [2024-11-06 15:43:54.032499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.032540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.032658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.032704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.032961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.033002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.033228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.033270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.033476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.033517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 
00:39:26.521 [2024-11-06 15:43:54.033668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.033709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.033984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.034025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.034224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.034268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.034499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.034540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 00:39:26.521 [2024-11-06 15:43:54.034725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.521 [2024-11-06 15:43:54.034767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:26.521 qpair failed and we were unable to recover it. 
00:39:26.521 [2024-11-06 15:43:54.034965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.521 [2024-11-06 15:43:54.035007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.521 qpair failed and we were unable to recover it.
00:39:26.521 [2024-11-06 15:43:54.035223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.521 [2024-11-06 15:43:54.035267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.521 qpair failed and we were unable to recover it.
00:39:26.521 [2024-11-06 15:43:54.035467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.521 [2024-11-06 15:43:54.035509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.521 qpair failed and we were unable to recover it.
00:39:26.521 [2024-11-06 15:43:54.035778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.521 [2024-11-06 15:43:54.035819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.521 qpair failed and we were unable to recover it.
00:39:26.521 [2024-11-06 15:43:54.036054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.521 [2024-11-06 15:43:54.036095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.521 qpair failed and we were unable to recover it.
00:39:26.521 [2024-11-06 15:43:54.036292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.521 [2024-11-06 15:43:54.036336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.521 qpair failed and we were unable to recover it.
00:39:26.521 [2024-11-06 15:43:54.036553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.521 [2024-11-06 15:43:54.036595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.521 qpair failed and we were unable to recover it.
00:39:26.521 [2024-11-06 15:43:54.036851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.036892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.037168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.037217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.037372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.037414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.037539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.037580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.037774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.037816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.038016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.038058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.038217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.038260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.038456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.038496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.038761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.038803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.038999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.039039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.039227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.039270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.039513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.039557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.039694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.039737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.040008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.040050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.040335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.040380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.040590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.040633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.040824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.040866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.041079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.041121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.041368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.041413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.041614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.041657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.041849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.041891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.042169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.042218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.042361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.042403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.042612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.042655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.042917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.042966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.043243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.043286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.043483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.043525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.043786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.043828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.044080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.044123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.044328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.044373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.044582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.044624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.044822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.044864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.045068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.045111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.045390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.045435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.045578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.045620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.045848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.045890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.046101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.046144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.046288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.046331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.522 qpair failed and we were unable to recover it.
00:39:26.522 [2024-11-06 15:43:54.046598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.522 [2024-11-06 15:43:54.046641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.046975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.047018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.047335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.047378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.047663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.047705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.047857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.047901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.048052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.048094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.048286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.048331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.048454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.048495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.048753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.048794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.048926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.048967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.049168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.049219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.049479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.049521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.049662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.049704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.050035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.050123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.050410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.050463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.050679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.050725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.051020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.051065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.051373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.051419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.051622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.051666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.051858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.051900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.052105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.052148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.052357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.052402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.052606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.052648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.052856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.052899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.053045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.053089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.053378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.053423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.053655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.053706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.053911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.053954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.054104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.054148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.054367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.054410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.054617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.054661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.054850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.054893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.055183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.055239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.055443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.055485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.055620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.055663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.055856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.055900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.056123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.056165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.056470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.056515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-11-06 15:43:54.056680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.523 [2024-11-06 15:43:54.056724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-11-06 15:43:54.057003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.524 [2024-11-06 15:43:54.057046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-11-06 15:43:54.057196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.524 [2024-11-06 15:43:54.057253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-11-06 15:43:54.057459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.524 [2024-11-06 15:43:54.057503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-11-06 15:43:54.057761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.524 [2024-11-06 15:43:54.057803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-11-06 15:43:54.058015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.524 [2024-11-06 15:43:54.058059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-11-06 15:43:54.058328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.524 [2024-11-06 15:43:54.058373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-11-06 15:43:54.058656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.524 [2024-11-06 15:43:54.058699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-11-06 15:43:54.058957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.524 [2024-11-06 15:43:54.059008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-11-06 15:43:54.059244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.524 [2024-11-06 15:43:54.059288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-11-06 15:43:54.059565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.524 [2024-11-06 15:43:54.059609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-11-06 15:43:54.059865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.524 [2024-11-06 15:43:54.059908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-11-06 15:43:54.060187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.060245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.060445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.060488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.060722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.060765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.060969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.061012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.061155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.061198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 
00:39:26.524 [2024-11-06 15:43:54.061490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.061534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.061741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.061784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.061927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.061970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.062169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.062223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.062436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.062479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 
00:39:26.524 [2024-11-06 15:43:54.062707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.062750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.062887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.062944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.063151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.063195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.063410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.063454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.063671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.063714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 
00:39:26.524 [2024-11-06 15:43:54.063976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.064018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.064249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.064300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.064566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.064610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.064766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.064809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.064940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.064982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 
00:39:26.524 [2024-11-06 15:43:54.065125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.065168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.065402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.524 [2024-11-06 15:43:54.065445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.524 qpair failed and we were unable to recover it. 00:39:26.524 [2024-11-06 15:43:54.065804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.065848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.066107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.066149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.066361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.066406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 
00:39:26.525 [2024-11-06 15:43:54.066543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.066584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.066900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.066945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.067148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.067190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.067333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.067376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.067613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.067656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 
00:39:26.525 [2024-11-06 15:43:54.067807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.067852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.067987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.068031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.068310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.068355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.068581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.068623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.068793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.068837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 
00:39:26.525 [2024-11-06 15:43:54.069052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.069094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.069351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.069395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.069603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.069645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.069854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.069898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.070214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.070256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 
00:39:26.525 [2024-11-06 15:43:54.070493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.070536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.070766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.070809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.071095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.071138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.071405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.071450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.071708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.071752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 
00:39:26.525 [2024-11-06 15:43:54.071957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.071999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.072313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.072359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.072567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.072610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.072896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.072939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.073196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.073250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 
00:39:26.525 [2024-11-06 15:43:54.073478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.073522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.073726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.073770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.073971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.074015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.074234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.074278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.074485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.074529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 
00:39:26.525 [2024-11-06 15:43:54.074761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.074804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.075006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.075059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.075286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.075331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.075551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.075594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 00:39:26.525 [2024-11-06 15:43:54.075803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.525 [2024-11-06 15:43:54.075847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.525 qpair failed and we were unable to recover it. 
00:39:26.526 [2024-11-06 15:43:54.075997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.076039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.076321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.076366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.076586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.076629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.076916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.076960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.077241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.077286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 
00:39:26.526 [2024-11-06 15:43:54.077519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.077562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.077847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.077890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.078160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.078212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.078421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.078465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.078693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.078736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 
00:39:26.526 [2024-11-06 15:43:54.078951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.078994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.079269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.079313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.079512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.079554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.079699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.079742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.079953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.079995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 
00:39:26.526 [2024-11-06 15:43:54.080212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.080277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.080411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.080454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.080647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.080688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.080921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.080965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.081172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.081226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 
00:39:26.526 [2024-11-06 15:43:54.081499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.081541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.081821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.081864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.082070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.082113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.082383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.082428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.082686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.082729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 
00:39:26.526 [2024-11-06 15:43:54.083042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.083084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.083327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.083370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.083635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.083677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.083973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.084016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 00:39:26.526 [2024-11-06 15:43:54.084231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.526 [2024-11-06 15:43:54.084275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.526 qpair failed and we were unable to recover it. 
00:39:26.529 [2024-11-06 15:43:54.114340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.529 [2024-11-06 15:43:54.114385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.529 qpair failed and we were unable to recover it. 00:39:26.529 [2024-11-06 15:43:54.114534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.529 [2024-11-06 15:43:54.114577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.529 qpair failed and we were unable to recover it. 00:39:26.529 [2024-11-06 15:43:54.114793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.529 [2024-11-06 15:43:54.114837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.529 qpair failed and we were unable to recover it. 00:39:26.806 [2024-11-06 15:43:54.115141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.806 [2024-11-06 15:43:54.115213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.806 qpair failed and we were unable to recover it. 00:39:26.806 [2024-11-06 15:43:54.115342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.806 [2024-11-06 15:43:54.115386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.806 qpair failed and we were unable to recover it. 
00:39:26.806 [2024-11-06 15:43:54.115686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.806 [2024-11-06 15:43:54.115729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.806 qpair failed and we were unable to recover it. 00:39:26.806 [2024-11-06 15:43:54.115883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.806 [2024-11-06 15:43:54.115926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.806 qpair failed and we were unable to recover it. 00:39:26.806 [2024-11-06 15:43:54.116133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.806 [2024-11-06 15:43:54.116177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.806 qpair failed and we were unable to recover it. 00:39:26.806 [2024-11-06 15:43:54.116351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.806 [2024-11-06 15:43:54.116395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.806 qpair failed and we were unable to recover it. 00:39:26.806 [2024-11-06 15:43:54.116554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.116595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 
00:39:26.807 [2024-11-06 15:43:54.116799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.116841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.116973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.117016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.117233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.117279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.117507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.117550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.117744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.117786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 
00:39:26.807 [2024-11-06 15:43:54.117977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.118021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.118241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.118286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.118451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.118493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.118662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.118705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.118988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.119032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 
00:39:26.807 [2024-11-06 15:43:54.119247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.119292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.119555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.119598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.119828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.119870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.120023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.120065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.120215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.120259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 
00:39:26.807 [2024-11-06 15:43:54.120468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.120511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.120649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.120691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.120826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.120867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.121067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.121112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.121266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.121310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 
00:39:26.807 [2024-11-06 15:43:54.121505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.121548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.121835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.121878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.122170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.122222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.122378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.122423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.122564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.122607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 
00:39:26.807 [2024-11-06 15:43:54.122806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.122850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.123073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.123117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.123328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.123373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.123501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.123543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.123698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.123741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 
00:39:26.807 [2024-11-06 15:43:54.123950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.123994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.124220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.124265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.124409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.807 [2024-11-06 15:43:54.124452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.807 qpair failed and we were unable to recover it. 00:39:26.807 [2024-11-06 15:43:54.124683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.124733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.124876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.124918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 
00:39:26.808 [2024-11-06 15:43:54.125117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.125161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.125376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.125421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.125578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.125619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.125774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.125816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.126020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.126063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 
00:39:26.808 [2024-11-06 15:43:54.126337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.126382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.126514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.126564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.126709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.126753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.126940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.126982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.127193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.127250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 
00:39:26.808 [2024-11-06 15:43:54.127457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.127501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.127716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.127758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.127960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.128003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.128224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.128268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.128418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.128460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 
00:39:26.808 [2024-11-06 15:43:54.128671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.128714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.128922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.128965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.129193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.129248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.129456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.129498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.129645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.129687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 
00:39:26.808 [2024-11-06 15:43:54.129928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.129972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.130227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.130275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.130490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.130546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.130685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.130728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.130924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.130967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 
00:39:26.808 [2024-11-06 15:43:54.131249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.131295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.131434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.131477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.131686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.131731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.131959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.132001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 00:39:26.808 [2024-11-06 15:43:54.132194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.808 [2024-11-06 15:43:54.132250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.808 qpair failed and we were unable to recover it. 
00:39:26.808 [2024-11-06 15:43:54.132493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.808 [2024-11-06 15:43:54.132536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.808 qpair failed and we were unable to recover it.
00:39:26.808 [2024-11-06 15:43:54.132829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.132874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.133082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.133125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.133270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.133315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.133438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.133481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.133734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.133776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.134072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.134117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.134373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.134418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.134575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.134625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.134831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.134875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.135012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.135054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.135248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.135292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.135488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.135531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.135792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.135834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.136025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.136068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.136285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.136329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.136524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.136564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.136701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.136743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.136876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.136918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.137050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.137092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.137287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.137331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.137467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.137509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.137718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.137764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.137982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.138023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.138231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.138277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.138486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.138530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.138813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.138855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.138993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.139037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.139256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.139300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.139456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.139501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.139780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.139821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.140024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.140067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.140212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.140258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.140470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.140515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.140780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.140822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.141046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.809 [2024-11-06 15:43:54.141090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.809 qpair failed and we were unable to recover it.
00:39:26.809 [2024-11-06 15:43:54.141400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.141443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.141588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.141632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.141781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.141825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.141967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.142008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.142296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.142342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.142530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.142574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.142711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.142753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.142947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.142989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.143275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.143320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.143463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.143506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.143640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.143685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.143975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.144018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.144159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.144220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.144356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.144398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.144686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.144729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.144877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.144919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.145190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.145244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.145396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.145439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.145745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.145789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.145941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.145984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.146273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.146329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.146633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.146677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.146797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.146842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.147051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.147095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.147298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.147343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.147550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.147593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.147812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.147856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.148050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.148093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.148316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.148362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.148570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.148614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.148879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.148923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.149126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.149169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.149305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.149349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.149490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.149531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.149732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.149776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.150001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.150045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.150320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.150364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.810 [2024-11-06 15:43:54.150557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.810 [2024-11-06 15:43:54.150599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.810 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.150748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.150790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.150993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.151036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.151238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.151284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.151484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.151528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.151676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.151719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.151846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.151889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.152130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.152172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.152378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.152422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.152677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.152720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.152844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.152885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.153152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.153194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.153415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.153457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.153652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.153696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.153926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.153968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.154188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.154250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.154456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.154501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.154757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.154800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.154995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.155040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.155170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.155224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.155433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.155476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.155676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.155719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.156024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.156067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.156223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.156267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.156488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.156533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.156761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.156804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.157086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.157129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.157351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.157396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.157524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.157568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.157786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.157829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.158107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.811 [2024-11-06 15:43:54.158152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.811 qpair failed and we were unable to recover it.
00:39:26.811 [2024-11-06 15:43:54.158321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.812 [2024-11-06 15:43:54.158372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.812 qpair failed and we were unable to recover it.
00:39:26.812 [2024-11-06 15:43:54.158585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.812 [2024-11-06 15:43:54.158628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.812 qpair failed and we were unable to recover it.
00:39:26.812 [2024-11-06 15:43:54.158816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.812 [2024-11-06 15:43:54.158861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.812 qpair failed and we were unable to recover it.
00:39:26.812 [2024-11-06 15:43:54.159124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.812 [2024-11-06 15:43:54.159167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.812 qpair failed and we were unable to recover it.
00:39:26.812 [2024-11-06 15:43:54.159481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.812 [2024-11-06 15:43:54.159526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.812 qpair failed and we were unable to recover it.
00:39:26.812 [2024-11-06 15:43:54.159663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.812 [2024-11-06 15:43:54.159706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.812 qpair failed and we were unable to recover it.
00:39:26.812 [2024-11-06 15:43:54.159959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.812 [2024-11-06 15:43:54.160002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.812 qpair failed and we were unable to recover it.
00:39:26.812 [2024-11-06 15:43:54.160216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.812 [2024-11-06 15:43:54.160263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.812 qpair failed and we were unable to recover it.
00:39:26.812 [2024-11-06 15:43:54.160556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.812 [2024-11-06 15:43:54.160600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.812 qpair failed and we were unable to recover it.
00:39:26.812 [2024-11-06 15:43:54.160742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.812 [2024-11-06 15:43:54.160786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.812 qpair failed and we were unable to recover it.
00:39:26.812 [2024-11-06 15:43:54.160985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.812 [2024-11-06 15:43:54.161027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.812 qpair failed and we were unable to recover it.
00:39:26.812 [2024-11-06 15:43:54.161244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.161290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.161546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.161589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.161872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.161915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.162217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.162261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.162587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.162631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 
00:39:26.812 [2024-11-06 15:43:54.162829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.162884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.163077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.163121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.163279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.163323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.163542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.163585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.163720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.163763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 
00:39:26.812 [2024-11-06 15:43:54.163906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.163948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.164164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.164216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.164501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.164546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.164747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.164798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.164946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.164988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 
00:39:26.812 [2024-11-06 15:43:54.165190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.165254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.165461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.165504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.165635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.165679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.165935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.165979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.166107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.166150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 
00:39:26.812 [2024-11-06 15:43:54.166359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.166405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.812 qpair failed and we were unable to recover it. 00:39:26.812 [2024-11-06 15:43:54.166553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.812 [2024-11-06 15:43:54.166597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.166794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.166837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.167030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.167073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.167299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.167343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 
00:39:26.813 [2024-11-06 15:43:54.167479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.167523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.167713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.167757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.167956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.168000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.168213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.168259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.168404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.168447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 
00:39:26.813 [2024-11-06 15:43:54.168651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.168694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.168896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.168940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.169089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.169132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.169423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.169467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.169619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.169663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 
00:39:26.813 [2024-11-06 15:43:54.169798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.169841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.169985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.170028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.170250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.170295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.170431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.170475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.170682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.170725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 
00:39:26.813 [2024-11-06 15:43:54.170858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.170902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.171119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.171162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.171364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.171408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.171607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.171651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.171870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.171915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 
00:39:26.813 [2024-11-06 15:43:54.172054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.172096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.172295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.172342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.172554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.172597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.172796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.172839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.172998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.173040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 
00:39:26.813 [2024-11-06 15:43:54.173343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.173387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.173654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.173698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.173898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.173941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.174224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.174275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.174583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.174626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 
00:39:26.813 [2024-11-06 15:43:54.174875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.174920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.175075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.813 [2024-11-06 15:43:54.175117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.813 qpair failed and we were unable to recover it. 00:39:26.813 [2024-11-06 15:43:54.175374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.175419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.175645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.175688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.175847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.175890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 
00:39:26.814 [2024-11-06 15:43:54.176169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.176223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.176440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.176483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.176692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.176734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.176992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.177035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.177250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.177295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 
00:39:26.814 [2024-11-06 15:43:54.177587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.177630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.177790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.177833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.178104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.178149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.178422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.178469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.178707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.178762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 
00:39:26.814 [2024-11-06 15:43:54.178924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.178969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.179232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.179275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.179557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.179599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.179755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.179798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.180020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.180063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 
00:39:26.814 [2024-11-06 15:43:54.180256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.180301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.180493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.180536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.180663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.180705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.180905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.180949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 00:39:26.814 [2024-11-06 15:43:54.181232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.814 [2024-11-06 15:43:54.181277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.814 qpair failed and we were unable to recover it. 
00:39:26.814 [... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." entry repeats for tqpair=0x61500032ff80, addr=10.0.0.2, port=4420 through 2024-11-06 15:43:54.208773 ...]
00:39:26.817 [2024-11-06 15:43:54.208962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.817 [2024-11-06 15:43:54.209006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.817 qpair failed and we were unable to recover it. 00:39:26.817 [2024-11-06 15:43:54.209146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.817 [2024-11-06 15:43:54.209189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.817 qpair failed and we were unable to recover it. 00:39:26.817 [2024-11-06 15:43:54.209506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.817 [2024-11-06 15:43:54.209550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.817 qpair failed and we were unable to recover it. 00:39:26.817 [2024-11-06 15:43:54.209739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.817 [2024-11-06 15:43:54.209784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.817 qpair failed and we were unable to recover it. 00:39:26.817 [2024-11-06 15:43:54.209927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.817 [2024-11-06 15:43:54.209970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.817 qpair failed and we were unable to recover it. 
00:39:26.817 [2024-11-06 15:43:54.210121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.817 [2024-11-06 15:43:54.210164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.817 qpair failed and we were unable to recover it. 00:39:26.817 [2024-11-06 15:43:54.210373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.817 [2024-11-06 15:43:54.210422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.210637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.210692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.210966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.211009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.211226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.211270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 
00:39:26.818 [2024-11-06 15:43:54.211422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.211466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.211668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.211710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.211967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.212011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.212221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.212264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.212476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.212520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 
00:39:26.818 [2024-11-06 15:43:54.212672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.212714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.212928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.212972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.213195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.213265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.213481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.213525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.213711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.213753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 
00:39:26.818 [2024-11-06 15:43:54.213955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.213998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.214221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.214266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.214395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.214438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.214638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.214682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.214889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.214931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 
00:39:26.818 [2024-11-06 15:43:54.215075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.215119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.215352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.215397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.215660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.215704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.215842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.215885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.216040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.216084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 
00:39:26.818 [2024-11-06 15:43:54.216348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.216394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.216602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.216643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.216925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.216969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.217111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.217154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.217417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.217461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 
00:39:26.818 [2024-11-06 15:43:54.217611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.217655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.217790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.217833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.218058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.218101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.218334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.218378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.218575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.218618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 
00:39:26.818 [2024-11-06 15:43:54.218897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.818 [2024-11-06 15:43:54.218942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.818 qpair failed and we were unable to recover it. 00:39:26.818 [2024-11-06 15:43:54.219074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.219116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.219383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.219428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.219635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.219676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.219871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.219915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 
00:39:26.819 [2024-11-06 15:43:54.220098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.220140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.220427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.220471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.220664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.220709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.220840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.220882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.221147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.221192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 
00:39:26.819 [2024-11-06 15:43:54.221344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.221394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.221598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.221642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.221901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.221944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.222214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.222260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.222517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.222561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 
00:39:26.819 [2024-11-06 15:43:54.222781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.222824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.223033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.223075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.223275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.223319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.223451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.223494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.223727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.223770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 
00:39:26.819 [2024-11-06 15:43:54.224028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.224071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.224269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.224313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.224540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.224582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.224731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.224773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.224992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.225035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 
00:39:26.819 [2024-11-06 15:43:54.225242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.225287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.225478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.225519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.225728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.225771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.225915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.225957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.226160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.226213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 
00:39:26.819 [2024-11-06 15:43:54.226493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.226537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.226736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.226796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.227024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.227069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.227221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.227264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.227452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.227497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 
00:39:26.819 [2024-11-06 15:43:54.227705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.227747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.227888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.227930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.819 [2024-11-06 15:43:54.228124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.819 [2024-11-06 15:43:54.228166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.819 qpair failed and we were unable to recover it. 00:39:26.820 [2024-11-06 15:43:54.228451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.820 [2024-11-06 15:43:54.228495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.820 qpair failed and we were unable to recover it. 00:39:26.820 [2024-11-06 15:43:54.228684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.820 [2024-11-06 15:43:54.228726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.820 qpair failed and we were unable to recover it. 
00:39:26.820 [2024-11-06 15:43:54.228876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.820 [2024-11-06 15:43:54.228918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.820 qpair failed and we were unable to recover it. 00:39:26.820 [2024-11-06 15:43:54.229230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.820 [2024-11-06 15:43:54.229275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.820 qpair failed and we were unable to recover it. 00:39:26.820 [2024-11-06 15:43:54.229475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.820 [2024-11-06 15:43:54.229518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.820 qpair failed and we were unable to recover it. 00:39:26.820 [2024-11-06 15:43:54.229785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.820 [2024-11-06 15:43:54.229827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.820 qpair failed and we were unable to recover it. 00:39:26.820 [2024-11-06 15:43:54.230025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.820 [2024-11-06 15:43:54.230068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.820 qpair failed and we were unable to recover it. 
00:39:26.820 [2024-11-06 15:43:54.230226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.230271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.230466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.230508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.230736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.230778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.231067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.231111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.231239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.231283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.231433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.231483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.231624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.231668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.231856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.231900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.232099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.232141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.232288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.232332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.232553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.232596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.232807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.232849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.233045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.233088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.233313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.233358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.233504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.233548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.233760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.233804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.234003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.234047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.234285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.234330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.234475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.234517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.234780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.234823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.235027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.235070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.235296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.235342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.235542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.235584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.235856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.235899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.236113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.236155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.236374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.820 [2024-11-06 15:43:54.236418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.820 qpair failed and we were unable to recover it.
00:39:26.820 [2024-11-06 15:43:54.236622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.236664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.236895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.236939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.237080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.237127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.237415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.237459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.237615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.237659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.237884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.237927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.238266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.238312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.238502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.238545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.238750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.238793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.238950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.238992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.239181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.239233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.239439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.239482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.239736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.239780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.239993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.240035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.240241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.240286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.240475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.240517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.240732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.240775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.240983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.241027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.241223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.241268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.241472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.241521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.241750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.241793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.242014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.242055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.242262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.242307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.242529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.242574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.242763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.242817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.243075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.243119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.243317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.243361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.243616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.243660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.243920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.243963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.244097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.244139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.244371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.821 [2024-11-06 15:43:54.244416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.821 qpair failed and we were unable to recover it.
00:39:26.821 [2024-11-06 15:43:54.244621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.244664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.244810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.244853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.245078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.245122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.245316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.245373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.245667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.245711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.245917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.245960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.246227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.246271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.246400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.246442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.246584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.246626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.246840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.246883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.247177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.247231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.247494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.247536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.247749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.247794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.247947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.247992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.248113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.248156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.248428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.248473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.248696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.248739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.248862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.248904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.249161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.249222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.249417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.249462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.249650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.249693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.249896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.249939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.250056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.250100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.250281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.250325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.250538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.250582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.250866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.250910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.251165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.251217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.251360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.251402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.251665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.251715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.251933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.251976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.252119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.252162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.252311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.252355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.252609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.252652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.822 [2024-11-06 15:43:54.252847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.822 [2024-11-06 15:43:54.252890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.822 qpair failed and we were unable to recover it.
00:39:26.823 [2024-11-06 15:43:54.253118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.823 [2024-11-06 15:43:54.253162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.823 qpair failed and we were unable to recover it.
00:39:26.823 [2024-11-06 15:43:54.253461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.823 [2024-11-06 15:43:54.253507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.823 qpair failed and we were unable to recover it.
00:39:26.823 [2024-11-06 15:43:54.253658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.823 [2024-11-06 15:43:54.253701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.823 qpair failed and we were unable to recover it.
00:39:26.823 [2024-11-06 15:43:54.253925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.823 [2024-11-06 15:43:54.253968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.823 qpair failed and we were unable to recover it.
00:39:26.823 [2024-11-06 15:43:54.254091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.254135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.254428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.254473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.254743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.254787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.255020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.255063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.255337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.255381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 
00:39:26.823 [2024-11-06 15:43:54.255645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.255687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.255895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.255939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.256098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.256140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.256349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.256394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.256628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.256671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 
00:39:26.823 [2024-11-06 15:43:54.256864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.256906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.257145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.257187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.257488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.257532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.257672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.257714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.257851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.257896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 
00:39:26.823 [2024-11-06 15:43:54.258188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.258240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.258447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.258491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.258638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.258682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.258896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.258940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.259091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.259182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 
00:39:26.823 [2024-11-06 15:43:54.259400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.259443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.259723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.259765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.259954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.259999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.260227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.260271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.260466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.260510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 
00:39:26.823 [2024-11-06 15:43:54.260707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.260750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.260949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.260992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.261135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.261177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.261460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.261503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.261654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.261697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 
00:39:26.823 [2024-11-06 15:43:54.261923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.261973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.823 qpair failed and we were unable to recover it. 00:39:26.823 [2024-11-06 15:43:54.262165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.823 [2024-11-06 15:43:54.262222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.262441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.262484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.262699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.262742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.263002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.263045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 
00:39:26.824 [2024-11-06 15:43:54.263240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.263284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.263476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.263519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.263800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.263843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.263980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.264023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.264156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.264200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 
00:39:26.824 [2024-11-06 15:43:54.264349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.264391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.264716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.264759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.264967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.265009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.265244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.265288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.265502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.265546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 
00:39:26.824 [2024-11-06 15:43:54.265743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.265786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.265987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.266029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.266320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.266364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.266555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.266598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.266748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.266791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 
00:39:26.824 [2024-11-06 15:43:54.266998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.267041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.267173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.267226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.267433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.267476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.267645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.267690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.267883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.267925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 
00:39:26.824 [2024-11-06 15:43:54.268118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.268160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.268384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.268428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.268588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.268631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.268835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.268879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.269181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.269244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 
00:39:26.824 [2024-11-06 15:43:54.269436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.824 [2024-11-06 15:43:54.269479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.824 qpair failed and we were unable to recover it. 00:39:26.824 [2024-11-06 15:43:54.269628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.269672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.269810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.269854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.270044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.270088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.270246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.270289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 
00:39:26.825 [2024-11-06 15:43:54.270482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.270527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.270663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.270707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.270915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.270959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.271096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.271139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.271350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.271396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 
00:39:26.825 [2024-11-06 15:43:54.271656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.271707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.271964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.272006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.272266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.272311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.272458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.272501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.272638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.272680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 
00:39:26.825 [2024-11-06 15:43:54.272937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.272980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.273124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.273167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.273310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.273353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.273633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.273677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.273885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.273927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 
00:39:26.825 [2024-11-06 15:43:54.274059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.274102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.274371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.274416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.274632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.274676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.274874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.274927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 00:39:26.825 [2024-11-06 15:43:54.275155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.825 [2024-11-06 15:43:54.275200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.825 qpair failed and we were unable to recover it. 
00:39:26.825 [2024-11-06 15:43:54.275401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.825 [2024-11-06 15:43:54.275444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.825 qpair failed and we were unable to recover it.
00:39:26.825 [2024-11-06 15:43:54.275652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.825 [2024-11-06 15:43:54.275697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.825 qpair failed and we were unable to recover it.
00:39:26.825 [2024-11-06 15:43:54.275901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.825 [2024-11-06 15:43:54.275943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.825 qpair failed and we were unable to recover it.
00:39:26.825 [2024-11-06 15:43:54.276139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.825 [2024-11-06 15:43:54.276183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.825 qpair failed and we were unable to recover it.
00:39:26.825 [2024-11-06 15:43:54.276402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.825 [2024-11-06 15:43:54.276447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.825 qpair failed and we were unable to recover it.
00:39:26.825 [2024-11-06 15:43:54.276757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.825 [2024-11-06 15:43:54.276799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.825 qpair failed and we were unable to recover it.
00:39:26.825 [2024-11-06 15:43:54.277004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.825 [2024-11-06 15:43:54.277047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.825 qpair failed and we were unable to recover it.
00:39:26.825 [2024-11-06 15:43:54.277253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.825 [2024-11-06 15:43:54.277298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.825 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.277492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.277535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.277750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.277791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.278013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.278057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.278349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.278395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.278610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.278652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.278789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.278832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.279087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.279130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.279370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.279413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.279620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.279665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.279798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.279841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.280049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.280093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.280235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.280278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.280487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.280530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.280668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.280711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.280920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.280965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.281107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.281149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.281383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.281427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.281564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.281620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.281756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.281801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.281992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.282034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.282235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.282280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.282477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.282519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.282711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.282755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.282998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.283041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.283235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.283279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.283429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.283471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.283605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.283648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.283860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.283904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.284180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.284235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.284440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.284482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.826 [2024-11-06 15:43:54.284700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.826 [2024-11-06 15:43:54.284743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.826 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.284948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.284992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.285136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.285180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.285390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.285435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.285648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.285693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.285843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.285885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.286099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.286142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.286285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.286331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.286592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.286635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.286803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.286847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.287129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.287172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.287334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.287378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.287588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.287632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.287825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.287869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.288006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.288050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.288252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.288296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.288488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.288532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.288677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.288720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.288922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.288963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.289098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.289143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.289310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.289355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.289492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.289536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.289823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.289864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.290118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.290164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.290317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.290373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.290578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.290620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.290829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.290871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.291026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.291077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.291291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.291335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.291568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.291610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.291830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.291874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.292088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.292131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.827 [2024-11-06 15:43:54.292349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.827 [2024-11-06 15:43:54.292393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.827 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.292587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.292630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.292781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.292824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.293101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.293143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.293359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.293403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.293614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.293657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.293855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.293898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.294037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.294080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.294284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.294329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.294589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.294633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.294765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.294807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.295004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.295047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.295300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.295348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.295550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.295593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.295822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.295865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.296006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.296049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.296239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.296284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.296493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.296536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.296736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.296779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.296988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.297030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.297337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.297385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.297590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.297634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.297797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.297841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.297991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.298034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.298292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.298338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.298532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.298575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.298832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.298874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.299016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.299059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.299249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.299294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.299448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.299490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.299701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.299745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.300025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.300068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.300262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.300306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.300573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.300617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.828 qpair failed and we were unable to recover it.
00:39:26.828 [2024-11-06 15:43:54.300757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.828 [2024-11-06 15:43:54.300801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.829 qpair failed and we were unable to recover it.
00:39:26.829 [2024-11-06 15:43:54.300997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.829 [2024-11-06 15:43:54.301046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.829 qpair failed and we were unable to recover it.
00:39:26.829 [2024-11-06 15:43:54.301238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.829 [2024-11-06 15:43:54.301283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.829 qpair failed and we were unable to recover it.
00:39:26.829 [2024-11-06 15:43:54.301474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.829 [2024-11-06 15:43:54.301518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.829 qpair failed and we were unable to recover it.
00:39:26.829 [2024-11-06 15:43:54.301653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.829 [2024-11-06 15:43:54.301697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.829 qpair failed and we were unable to recover it.
00:39:26.829 [2024-11-06 15:43:54.301955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.829 [2024-11-06 15:43:54.301996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.829 qpair failed and we were unable to recover it.
00:39:26.829 [2024-11-06 15:43:54.302194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.829 [2024-11-06 15:43:54.302252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.829 qpair failed and we were unable to recover it.
00:39:26.829 [2024-11-06 15:43:54.302467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.829 [2024-11-06 15:43:54.302509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.829 qpair failed and we were unable to recover it.
00:39:26.829 [2024-11-06 15:43:54.302634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.829 [2024-11-06 15:43:54.302678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.829 qpair failed and we were unable to recover it.
00:39:26.829 [2024-11-06 15:43:54.302893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.829 [2024-11-06 15:43:54.302935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.829 qpair failed and we were unable to recover it.
00:39:26.829 [2024-11-06 15:43:54.303148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.829 [2024-11-06 15:43:54.303192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:26.829 qpair failed and we were unable to recover it.
00:39:26.829 [2024-11-06 15:43:54.303460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.303504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.303690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.303734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.303934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.303978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.304234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.304279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.304500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.304545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 
00:39:26.829 [2024-11-06 15:43:54.304825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.304868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.305016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.305060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.305274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.305319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.305511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.305553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.305698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.305743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 
00:39:26.829 [2024-11-06 15:43:54.305876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.305919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.306119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.306176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.306391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.306436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.306645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.306687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.306894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.306938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 
00:39:26.829 [2024-11-06 15:43:54.307076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.307118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.307262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.307306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.307476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.307521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.307831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.307874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.308073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.308116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 
00:39:26.829 [2024-11-06 15:43:54.308264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.308309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.829 [2024-11-06 15:43:54.308509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.829 [2024-11-06 15:43:54.308552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.829 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.308781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.308823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.309023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.309066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.309217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.309261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 
00:39:26.830 [2024-11-06 15:43:54.309425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.309468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.309660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.309703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.309843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.309887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.310022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.310064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.310200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.310256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 
00:39:26.830 [2024-11-06 15:43:54.310449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.310497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.310701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.310745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.310958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.311000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.311137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.311182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.311403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.311446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 
00:39:26.830 [2024-11-06 15:43:54.311572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.311616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.311902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.311946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.312091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.312134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.312376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.312420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.312629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.312671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 
00:39:26.830 [2024-11-06 15:43:54.312894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.312937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.313125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.313170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.313313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.313355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.313481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.313525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.313744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.313788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 
00:39:26.830 [2024-11-06 15:43:54.313927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.313971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.314190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.314264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.314392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.314435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.314669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.314712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.314916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.314959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 
00:39:26.830 [2024-11-06 15:43:54.315098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.315141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.315305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.315351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.315570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.315614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.315871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.315914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.316111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.316157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 
00:39:26.830 [2024-11-06 15:43:54.316387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.830 [2024-11-06 15:43:54.316432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.830 qpair failed and we were unable to recover it. 00:39:26.830 [2024-11-06 15:43:54.316571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.316614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.316876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.316971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.317154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.317223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.317373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.317421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 
00:39:26.831 [2024-11-06 15:43:54.317593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.317641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.317840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.317884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.318095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.318139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.318411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.318458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.318620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.318669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 
00:39:26.831 [2024-11-06 15:43:54.318956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.319002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.319141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.319187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.319478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.319523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.319661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.319706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.320006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.320068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 
00:39:26.831 [2024-11-06 15:43:54.320212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.320266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.320428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.320474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.320766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.320811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.321010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.321054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.321251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.321311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 
00:39:26.831 [2024-11-06 15:43:54.321513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.321556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.321758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.321803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.322088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.322131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.322334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.322377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.322514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.322561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 
00:39:26.831 [2024-11-06 15:43:54.322780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.322826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.323107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.323151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.323369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.323415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.323670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.323714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.323879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.323924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 
00:39:26.831 [2024-11-06 15:43:54.324134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.324178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.831 qpair failed and we were unable to recover it. 00:39:26.831 [2024-11-06 15:43:54.324394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.831 [2024-11-06 15:43:54.324438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.324576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.324619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.324833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.324877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.325038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.325083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 
00:39:26.832 [2024-11-06 15:43:54.325236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.325282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.325418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.325462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.325659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.325704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.325892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.325937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.326197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.326254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 
00:39:26.832 [2024-11-06 15:43:54.326397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.326440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.326651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.326696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.326921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.326965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.327086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.327129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.327349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.327395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 
00:39:26.832 [2024-11-06 15:43:54.327558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.327603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.327746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.327791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.328047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.328091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.328317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.328362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.328634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.328680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 
00:39:26.832 [2024-11-06 15:43:54.328821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.328865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.329066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.329110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.329340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.329385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.329594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.329639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.329853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.329899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 
00:39:26.832 [2024-11-06 15:43:54.330095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.330147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.330347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.330393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.330621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.330666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.330945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.330989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 00:39:26.832 [2024-11-06 15:43:54.331183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.832 [2024-11-06 15:43:54.331247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.832 qpair failed and we were unable to recover it. 
00:39:26.832 [2024-11-06 15:43:54.331446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.331491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.331714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.331759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.331950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.331993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.332215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.332260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.332417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.332460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 
00:39:26.833 [2024-11-06 15:43:54.332619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.332668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.332952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.333001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.333230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.333289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.333448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.333491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.333692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.333738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 
00:39:26.833 [2024-11-06 15:43:54.333942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.333986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.334213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.334263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.334552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.334608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.334867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.334911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.335035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.335078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 
00:39:26.833 [2024-11-06 15:43:54.335224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.335269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.335475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.335519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.335664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.335717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.335992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.336052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.336291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.336339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 
00:39:26.833 [2024-11-06 15:43:54.336535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.336579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.336867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.833 [2024-11-06 15:43:54.336911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.833 qpair failed and we were unable to recover it. 00:39:26.833 [2024-11-06 15:43:54.337103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.337146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.337301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.337357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.337546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.337605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 
00:39:26.834 [2024-11-06 15:43:54.337823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.337869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.338157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.338214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.338371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.338415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.338722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.338772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.338997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.339046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 
00:39:26.834 [2024-11-06 15:43:54.339221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.339277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.339470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.339514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.339667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.339710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.339913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.339958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.340121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.340165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 
00:39:26.834 [2024-11-06 15:43:54.340297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.340349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.340548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.340595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.340741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.340792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.341006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.341052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.341263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.341309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 
00:39:26.834 [2024-11-06 15:43:54.341506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.341549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.341698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.341747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.341893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.341947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.342155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.342198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.342359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.342403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 
00:39:26.834 [2024-11-06 15:43:54.342672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.342716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.342980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.343024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.343151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.343194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.343414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.343466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.343775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.343819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 
00:39:26.834 [2024-11-06 15:43:54.343976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.344022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.834 qpair failed and we were unable to recover it. 00:39:26.834 [2024-11-06 15:43:54.344228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.834 [2024-11-06 15:43:54.344273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.835 qpair failed and we were unable to recover it. 00:39:26.835 [2024-11-06 15:43:54.344409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.835 [2024-11-06 15:43:54.344451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.835 qpair failed and we were unable to recover it. 00:39:26.835 [2024-11-06 15:43:54.344679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.835 [2024-11-06 15:43:54.344733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.835 qpair failed and we were unable to recover it. 00:39:26.835 [2024-11-06 15:43:54.344972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.835 [2024-11-06 15:43:54.345017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.835 qpair failed and we were unable to recover it. 
00:39:26.835 [2024-11-06 15:43:54.345281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.835 [2024-11-06 15:43:54.345329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.835 qpair failed and we were unable to recover it. 00:39:26.835 [2024-11-06 15:43:54.345520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.835 [2024-11-06 15:43:54.345564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.835 qpair failed and we were unable to recover it. 00:39:26.835 [2024-11-06 15:43:54.345714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.835 [2024-11-06 15:43:54.345759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.835 qpair failed and we were unable to recover it. 00:39:26.835 [2024-11-06 15:43:54.345944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.835 [2024-11-06 15:43:54.345987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.835 qpair failed and we were unable to recover it. 00:39:26.835 [2024-11-06 15:43:54.346134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.835 [2024-11-06 15:43:54.346179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.835 qpair failed and we were unable to recover it. 
00:39:26.835 [... the same record triplet ("posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeated for every subsequent connection attempt from [2024-11-06 15:43:54.346390] through [2024-11-06 15:43:54.374660], console timestamps 00:39:26.835 to 00:39:26.839 ...]
00:39:26.839 [2024-11-06 15:43:54.374863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.374906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.375036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.375079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.375274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.375319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.375542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.375586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.375726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.375777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 
00:39:26.839 [2024-11-06 15:43:54.375994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.376038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.376241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.376287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.376492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.376535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.376741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.376783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.376971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.377013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 
00:39:26.839 [2024-11-06 15:43:54.377214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.377257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.377419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.377462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.377607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.377658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.377867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.377911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.378038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.378082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 
00:39:26.839 [2024-11-06 15:43:54.378275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.378321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.378601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.378646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.378846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.378890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.379117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.379161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.379360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.379405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 
00:39:26.839 [2024-11-06 15:43:54.379589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.379632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.379777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.379820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.380074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.380117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.380320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.380366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.380556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.380599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 
00:39:26.839 [2024-11-06 15:43:54.380862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.380906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.381049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.381092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.381282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.381327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.381455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.381498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.381710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.381752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 
00:39:26.839 [2024-11-06 15:43:54.381956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.381999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.382132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.382175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.382381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.382425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.382547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.382590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.382894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.382937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 
00:39:26.839 [2024-11-06 15:43:54.383135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.383178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.383413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.383458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.383617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.383660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.383938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.383981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.384240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.384288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 
00:39:26.839 [2024-11-06 15:43:54.384438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.384494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.384647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.384689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.384829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.384871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.385087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.385132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.385284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.385327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 
00:39:26.839 [2024-11-06 15:43:54.385452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.385494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.385657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.385700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.385895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.385938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.386149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.386192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 00:39:26.839 [2024-11-06 15:43:54.386399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.839 [2024-11-06 15:43:54.386445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.839 qpair failed and we were unable to recover it. 
00:39:26.839 [2024-11-06 15:43:54.386657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.386701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.386894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.386937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.387063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.387114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.387388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.387432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.387576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.387618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 
00:39:26.840 [2024-11-06 15:43:54.387917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.387961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.388152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.388194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.388335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.388381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.388528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.388571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.388861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.388905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 
00:39:26.840 [2024-11-06 15:43:54.389062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.389107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.389244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.389289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.389481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.389523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.389669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.389713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.389852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.389896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 
00:39:26.840 [2024-11-06 15:43:54.390088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.390131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.390269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.390314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.390570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.390614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.390743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.390787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.390986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.391029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 
00:39:26.840 [2024-11-06 15:43:54.391246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.391290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.391484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.391526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.391727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.391770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.391984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.392026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 00:39:26.840 [2024-11-06 15:43:54.392233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.840 [2024-11-06 15:43:54.392278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.840 qpair failed and we were unable to recover it. 
00:39:26.840 [2024-11-06 15:43:54.392487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.392531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.392753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.392808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.392963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.393009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.393154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.393195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.393380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.393425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.393671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.393717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.393924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.393979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.394138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.394182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.394396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.394441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.394640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.394686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.394974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.395018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.395158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.395218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.395399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.395445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.395661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.395716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.395913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.395956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.396164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.396217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.396501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.396546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.396813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.396867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.397080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.397127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.397396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.397440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.397636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.397680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.397895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.397937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.398092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.398137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.398282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.398326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.398615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.398659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.398849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.398893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.399036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.399079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.399342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.399388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.399673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.399718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.399975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.400031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.400293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.840 [2024-11-06 15:43:54.400339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.840 qpair failed and we were unable to recover it.
00:39:26.840 [2024-11-06 15:43:54.400547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.400591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.400790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.400833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.401023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.401066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.401189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.401253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.401532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.401577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.401777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.401819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.401968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.402011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.402133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.402176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.402320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.402363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.402553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.402596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.402746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.402789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.403009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.403051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.403309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.403353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.403492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.403534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.403740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.403785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.403973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.404017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.404272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.404315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.404440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.404483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.404619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.404664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.404856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.404901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.405053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.405096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.405310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.405355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.405615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.405659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.405861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.405904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.406108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.406152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.406329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.406373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.406569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.406620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.406770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.406822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.407015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.407058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.407338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.407383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.407647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.407690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.407968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.408011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.408221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.408264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.408468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.408513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.408710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.408752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.408886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.408928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.409149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.409192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.409501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.409546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.409801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.409847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.410150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.410194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.410364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.410408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.410602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.410644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.410852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.410895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.411031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.411072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.411200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.411258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.411487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.411535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.411749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.841 [2024-11-06 15:43:54.411793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.841 qpair failed and we were unable to recover it.
00:39:26.841 [2024-11-06 15:43:54.412002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.412047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.412256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.412302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.412448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.412491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.412708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.412753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.413012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.413056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.413251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.413298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.413513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.413558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.413835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.413879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.414039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.414088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.414231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.414279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.414538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.414581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.414729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.414774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.415018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.415061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.415271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.415315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.415485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.415539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.415741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.415801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.416004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.416050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.416327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.416372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.416579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.416622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.416811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.416861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.417019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.417066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.417231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.417278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.417506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.417557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.417846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.417890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.418022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.418068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.418222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.418274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.418511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.418559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.418764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.418809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.419021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.419064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.419251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.419294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.419491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.419538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.419801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.419849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.420056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.420101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.420237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.420282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.420493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.420538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.420794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.420844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.421041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.842 [2024-11-06 15:43:54.421084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:26.842 qpair failed and we were unable to recover it.
00:39:26.842 [2024-11-06 15:43:54.421247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.421291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 00:39:26.842 [2024-11-06 15:43:54.421485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.421528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 00:39:26.842 [2024-11-06 15:43:54.421670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.421713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 00:39:26.842 [2024-11-06 15:43:54.421922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.421964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 00:39:26.842 [2024-11-06 15:43:54.422269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.422313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 
00:39:26.842 [2024-11-06 15:43:54.422504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.422547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 00:39:26.842 [2024-11-06 15:43:54.422758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.422801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 00:39:26.842 [2024-11-06 15:43:54.422990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.423032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 00:39:26.842 [2024-11-06 15:43:54.423264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.423308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 00:39:26.842 [2024-11-06 15:43:54.423510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.423554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 
00:39:26.842 [2024-11-06 15:43:54.423695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.423737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 00:39:26.842 [2024-11-06 15:43:54.423994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.424037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 00:39:26.842 [2024-11-06 15:43:54.424176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.424228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 00:39:26.842 [2024-11-06 15:43:54.424438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.424481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 00:39:26.842 [2024-11-06 15:43:54.424704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.842 [2024-11-06 15:43:54.424747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:26.842 qpair failed and we were unable to recover it. 
00:39:27.115 [2024-11-06 15:43:54.424968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.425011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.425293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.425337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.425545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.425587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.425844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.425887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.426051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.426094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 
00:39:27.115 [2024-11-06 15:43:54.426302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.426346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.426642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.426685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.426884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.426932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.427124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.427166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.427311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.427355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 
00:39:27.115 [2024-11-06 15:43:54.427482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.427524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.427666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.427718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.427927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.427971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.428160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.428216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.428373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.428417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 
00:39:27.115 [2024-11-06 15:43:54.428697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.428740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.428997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.429039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.429188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.429252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.429401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.429443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.429661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.429704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 
00:39:27.115 [2024-11-06 15:43:54.429909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.429953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.430173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.430228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.430376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.430419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.430688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.430731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.431010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.431052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 
00:39:27.115 [2024-11-06 15:43:54.431326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.431371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.431658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.115 [2024-11-06 15:43:54.431706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.115 qpair failed and we were unable to recover it. 00:39:27.115 [2024-11-06 15:43:54.431828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.431883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.432129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.432172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.432463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.432506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 
00:39:27.116 [2024-11-06 15:43:54.432646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.432688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.432832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.432873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.433072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.433113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.433406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.433450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.433588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.433630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 
00:39:27.116 [2024-11-06 15:43:54.433911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.433953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.434162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.434214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.434423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.434466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.434730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.434771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.434890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.434933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 
00:39:27.116 [2024-11-06 15:43:54.435073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.435115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.435234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.435278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.435471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.435515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.435732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.435775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.435900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.435942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 
00:39:27.116 [2024-11-06 15:43:54.436086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.436129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.436389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.436433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.436701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.436751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.436945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.436986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.437104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.437147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 
00:39:27.116 [2024-11-06 15:43:54.437429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.437472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.437687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.437729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.437887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.437930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.438084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.438125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.438382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.438425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 
00:39:27.116 [2024-11-06 15:43:54.438561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.438604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.438811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.438853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.439153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.439196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.439356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.439398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.439656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.439707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 
00:39:27.116 [2024-11-06 15:43:54.439922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.116 [2024-11-06 15:43:54.439965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.116 qpair failed and we were unable to recover it. 00:39:27.116 [2024-11-06 15:43:54.440228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.117 [2024-11-06 15:43:54.440274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.117 qpair failed and we were unable to recover it. 00:39:27.117 [2024-11-06 15:43:54.440535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.117 [2024-11-06 15:43:54.440577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.117 qpair failed and we were unable to recover it. 00:39:27.117 [2024-11-06 15:43:54.440710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.117 [2024-11-06 15:43:54.440752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.117 qpair failed and we were unable to recover it. 00:39:27.117 [2024-11-06 15:43:54.440952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.117 [2024-11-06 15:43:54.440994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.117 qpair failed and we were unable to recover it. 
00:39:27.120 [2024-11-06 15:43:54.473774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.473819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.474075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.474117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.474315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.474360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.474611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.474654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.474934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.474976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 
00:39:27.120 [2024-11-06 15:43:54.475225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.475270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.475559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.475602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.475795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.475838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.476135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.476177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.476484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.476528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 
00:39:27.120 [2024-11-06 15:43:54.476793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.476835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.477116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.477157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.477387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.477432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.477742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.477785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.477973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.478015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 
00:39:27.120 [2024-11-06 15:43:54.478292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.478337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.478606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.478649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.478856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.478899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.479154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.479197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.120 qpair failed and we were unable to recover it. 00:39:27.120 [2024-11-06 15:43:54.479502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.120 [2024-11-06 15:43:54.479552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 
00:39:27.121 [2024-11-06 15:43:54.479817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.479859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.480149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.480192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.480458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.480503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.480781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.480823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.481078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.481121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 
00:39:27.121 [2024-11-06 15:43:54.481335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.481379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.481649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.481692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.481967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.482010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.482224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.482269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.482574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.482615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 
00:39:27.121 [2024-11-06 15:43:54.482833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.482876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.483128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.483172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.483482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.483525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.483740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.483783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.484090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.484134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 
00:39:27.121 [2024-11-06 15:43:54.484378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.484422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.484716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.484759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.485035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.485078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.485302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.485346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.485648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.485690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 
00:39:27.121 [2024-11-06 15:43:54.485973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.486016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.486238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.486282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.486549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.486592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.486889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.486933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.487219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.487275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 
00:39:27.121 [2024-11-06 15:43:54.487617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.487660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.487924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.487968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.488252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.488298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.488529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.488571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.488791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.488834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 
00:39:27.121 [2024-11-06 15:43:54.489048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.489091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.489370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.489415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.489673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.489715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.489905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.121 [2024-11-06 15:43:54.489948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.121 qpair failed and we were unable to recover it. 00:39:27.121 [2024-11-06 15:43:54.490218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.490262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 
00:39:27.122 [2024-11-06 15:43:54.490520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.490563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.490816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.490859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.491057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.491100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.491361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.491406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.491704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.491758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 
00:39:27.122 [2024-11-06 15:43:54.492043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.492085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.492236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.492281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.492508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.492551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.492860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.492903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.493194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.493250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 
00:39:27.122 [2024-11-06 15:43:54.493478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.493522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.493668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.493710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.493998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.494040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.494277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.494322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.494619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.494661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 
00:39:27.122 [2024-11-06 15:43:54.494850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.494892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.495149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.495192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.495410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.495453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.495767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.495811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.496056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.496099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 
00:39:27.122 [2024-11-06 15:43:54.496377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.496428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.496710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.496754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.496991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.497034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.497249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.497293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.497561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.497604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 
00:39:27.122 [2024-11-06 15:43:54.497806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.497850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.498110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.498152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.498429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.498473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.498786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.498828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.499111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.499154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 
00:39:27.122 [2024-11-06 15:43:54.499386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.499431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.499738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.499782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.122 [2024-11-06 15:43:54.499998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.122 [2024-11-06 15:43:54.500040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.122 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.500194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.500261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.500569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.500611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 
00:39:27.123 [2024-11-06 15:43:54.500893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.500936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.501235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.501288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.501516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.501559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.501862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.501905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.502213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.502257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 
00:39:27.123 [2024-11-06 15:43:54.502540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.502584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.502884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.502926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.503157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.503199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.503512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.503555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.503780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.503829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 
00:39:27.123 [2024-11-06 15:43:54.504088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.504130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.504447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.504492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.504693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.504735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.505073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.505116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.505407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.505452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 
00:39:27.123 [2024-11-06 15:43:54.505733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.505776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.506037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.506079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.506302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.506348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.506548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.506615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.506945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.506988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 
00:39:27.123 [2024-11-06 15:43:54.507261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.507306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.507618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.507661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.507886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.507929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.508138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.508182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.508483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.508527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 
00:39:27.123 [2024-11-06 15:43:54.508736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.508779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.509060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.123 [2024-11-06 15:43:54.509102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.123 qpair failed and we were unable to recover it. 00:39:27.123 [2024-11-06 15:43:54.509294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.509339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.509573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.509616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.509903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.509946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 
00:39:27.124 [2024-11-06 15:43:54.510153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.510196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.510472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.510515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.510783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.510825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.511022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.511065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.511326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.511370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 
00:39:27.124 [2024-11-06 15:43:54.511634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.511677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.511888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.511932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.512224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.512268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.512513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.512558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.512832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.512874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 
00:39:27.124 [2024-11-06 15:43:54.513165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.513219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.513499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.513542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.513824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.513868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.514149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.514191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.514463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.514507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 
00:39:27.124 [2024-11-06 15:43:54.514780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.514823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.515026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.515069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.515281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.515326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.515517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.515560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.515845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.515895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 
00:39:27.124 [2024-11-06 15:43:54.516179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.516246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.516555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.516599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.516862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.516905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.517178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.517230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.517440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.517483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 
00:39:27.124 [2024-11-06 15:43:54.517695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.517739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.517954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.517997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.518215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.518259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.518538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.518582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.518868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.518912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 
00:39:27.124 [2024-11-06 15:43:54.519093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.519140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.124 qpair failed and we were unable to recover it. 00:39:27.124 [2024-11-06 15:43:54.519413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.124 [2024-11-06 15:43:54.519456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.519741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.519784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.519981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.520024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.520329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.520373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 
00:39:27.125 [2024-11-06 15:43:54.520649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.520692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.520955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.520998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.521284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.521328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.521611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.521654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.521939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.521982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 
00:39:27.125 [2024-11-06 15:43:54.522246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.522290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.522521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.522564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.522785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.522829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.523097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.523139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.523432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.523478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 
00:39:27.125 [2024-11-06 15:43:54.523621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.523664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.523950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.523993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.524264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.524310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.524579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.524622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.524892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.524935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 
00:39:27.125 [2024-11-06 15:43:54.525226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.525270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.525550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.525594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.525882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.525973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.526182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.526234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.526568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.526611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 
00:39:27.125 [2024-11-06 15:43:54.526896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.526938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.527216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.527261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.527533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.527577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.527779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.125 [2024-11-06 15:43:54.527822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.125 qpair failed and we were unable to recover it. 00:39:27.125 [2024-11-06 15:43:54.528107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.126 [2024-11-06 15:43:54.528158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.126 qpair failed and we were unable to recover it. 
00:39:27.126 [2024-11-06 15:43:54.528436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.528480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.528617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.528661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.528946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.528988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.529279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.529324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.529612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.529655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.529942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.529985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.530232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.530276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.530575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.530619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.530925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.530969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.531178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.531231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.531437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.531479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.531738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.531781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.532044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.532088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.532391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.532436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.532722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.532765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.533055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.533099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.533381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.533426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.533714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.533757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.533993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.534037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.534345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.534390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.534673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.534717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.534960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.535003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.535235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.535280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.535581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.535624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.535920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.535963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.536173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.536225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.536526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.536570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.536800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.536843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.537155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.537199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.537543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.126 [2024-11-06 15:43:54.537586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.126 qpair failed and we were unable to recover it.
00:39:27.126 [2024-11-06 15:43:54.537910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.537954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.538246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.538291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.538580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.538622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.538928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.538972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.539268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.539315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.539618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.539661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.539927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.539970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.540231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.540276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.540428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.540471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.540769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.540819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.541098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.541140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.541441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.541487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.541771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.541813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.542097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.542140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.542378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.542422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.542638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.542681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.542995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.543039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.543335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.543380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.543586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.543629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.543906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.543949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.544238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.544283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.544508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.544551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.544817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.544861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.545159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.545225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.545438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.545482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.545702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.545746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.546009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.546064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.546328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.546374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.546591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.546633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.546906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.546948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.547148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.547191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.127 qpair failed and we were unable to recover it.
00:39:27.127 [2024-11-06 15:43:54.547421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.127 [2024-11-06 15:43:54.547464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.547751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.547795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.548023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.548066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.548306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.548351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.548617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.548660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.548939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.548983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.549292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.549337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.549626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.549668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.549981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.550024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.550253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.550298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.550611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.550653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.550944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.550987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.551292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.551338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.551617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.551661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.551954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.551997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.552225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.552272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.552568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.552611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.552819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.552863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.553083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.553133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.553434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.553479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.553768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.553811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.554040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.554083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.554287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.554332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.554599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.554642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.554925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.554968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.555210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.555254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.555482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.555526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.555830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.555873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.556137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.556181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.556413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.556457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.556747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.556791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.557117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.557159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.557411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.557457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.557745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.128 [2024-11-06 15:43:54.557808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.128 qpair failed and we were unable to recover it.
00:39:27.128 [2024-11-06 15:43:54.558085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.128 [2024-11-06 15:43:54.558128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.128 qpair failed and we were unable to recover it. 00:39:27.128 [2024-11-06 15:43:54.558373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.128 [2024-11-06 15:43:54.558419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.128 qpair failed and we were unable to recover it. 00:39:27.128 [2024-11-06 15:43:54.558717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.128 [2024-11-06 15:43:54.558761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.128 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.559091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.559135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.559297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.559343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 
00:39:27.129 [2024-11-06 15:43:54.559551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.559595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.559831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.559873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.560103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.560147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.560470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.560516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.560807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.560849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 
00:39:27.129 [2024-11-06 15:43:54.561083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.561126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.561438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.561482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.561712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.561756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.562043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.562086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.562353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.562400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 
00:39:27.129 [2024-11-06 15:43:54.562692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.562736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.563034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.563077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.563306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.563351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.563555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.563597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.563891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.563935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 
00:39:27.129 [2024-11-06 15:43:54.564153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.564195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.564519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.564563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.564781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.564825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.565049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.565091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.565463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.565518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 
00:39:27.129 [2024-11-06 15:43:54.565719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.565775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.565992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.566034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.566350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.566394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.566615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.566657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.566905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.566947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 
00:39:27.129 [2024-11-06 15:43:54.567161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.567211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.567412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.567455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.567774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.567816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.568101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.568143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.568442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.568486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 
00:39:27.129 [2024-11-06 15:43:54.568778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.568820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.569110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.569152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.569455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.569500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.569804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.129 [2024-11-06 15:43:54.569846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.129 qpair failed and we were unable to recover it. 00:39:27.129 [2024-11-06 15:43:54.570118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.570160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 
00:39:27.130 [2024-11-06 15:43:54.570399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.570443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.570711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.570752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.571056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.571098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.571390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.571434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.571726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.571767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 
00:39:27.130 [2024-11-06 15:43:54.572009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.572055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.572296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.572339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.572639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.572682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.572964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.573007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.573278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.573322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 
00:39:27.130 [2024-11-06 15:43:54.573640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.573684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.573960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.574001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.574227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.574272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.574587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.574629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.574871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.574914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 
00:39:27.130 [2024-11-06 15:43:54.575215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.575261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.575471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.575514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.575782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.575823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.576022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.576064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.576348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.576392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 
00:39:27.130 [2024-11-06 15:43:54.576711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.576753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.577081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.577124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.577412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.577456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.577703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.577745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.578026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.578075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 
00:39:27.130 [2024-11-06 15:43:54.578287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.578331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.578550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.578595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.578835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.578878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.579099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.579141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.130 [2024-11-06 15:43:54.579470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.579513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 
00:39:27.130 [2024-11-06 15:43:54.579800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.130 [2024-11-06 15:43:54.579844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.130 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.580054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.580096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.580384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.580428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.580721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.580763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.581064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.581106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 
00:39:27.131 [2024-11-06 15:43:54.581415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.581459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.581750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.581793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.582108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.582151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.582459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.582503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.582777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.582820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 
00:39:27.131 [2024-11-06 15:43:54.583110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.583153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.583452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.583496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.583766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.583808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.584085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.584127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.584431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.584474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 
00:39:27.131 [2024-11-06 15:43:54.584769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.584812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.585037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.585080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.585399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.585444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.585749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.585792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.586060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.586129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 
00:39:27.131 [2024-11-06 15:43:54.586442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.586486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.586779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.586821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.587022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.587064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.587380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.587424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 00:39:27.131 [2024-11-06 15:43:54.587716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.131 [2024-11-06 15:43:54.587758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.131 qpair failed and we were unable to recover it. 
00:39:27.131 [2024-11-06 15:43:54.588051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.131 [2024-11-06 15:43:54.588093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.131 qpair failed and we were unable to recover it.
00:39:27.131 [2024-11-06 15:43:54.588334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.131 [2024-11-06 15:43:54.588378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.131 qpair failed and we were unable to recover it.
00:39:27.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4109638 Killed "${NVMF_APP[@]}" "$@"
00:39:27.131 [2024-11-06 15:43:54.588673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.131 [2024-11-06 15:43:54.588715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.131 qpair failed and we were unable to recover it.
00:39:27.131 [2024-11-06 15:43:54.588872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.131 [2024-11-06 15:43:54.588913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.131 qpair failed and we were unable to recover it.
00:39:27.131 [2024-11-06 15:43:54.589110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.131 [2024-11-06 15:43:54.589151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.131 15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:39:27.131 qpair failed and we were unable to recover it.
00:39:27.131 [2024-11-06 15:43:54.589430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.131 [2024-11-06 15:43:54.589474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.131 qpair failed and we were unable to recover it.
00:39:27.131 15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:39:27.131 [2024-11-06 15:43:54.589788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.131 [2024-11-06 15:43:54.589831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.131 qpair failed and we were unable to recover it.
00:39:27.131 15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:39:27.131 [2024-11-06 15:43:54.590040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.131 [2024-11-06 15:43:54.590084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.131 qpair failed and we were unable to recover it.
00:39:27.131 15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:39:27.131 [2024-11-06 15:43:54.590337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.131 [2024-11-06 15:43:54.590382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.131 qpair failed and we were unable to recover it.
00:39:27.131 15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:27.131 [2024-11-06 15:43:54.590613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.131 [2024-11-06 15:43:54.590656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.131 qpair failed and we were unable to recover it.
00:39:27.131 [2024-11-06 15:43:54.590873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.590915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.591112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.591155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.591401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.591445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.591681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.591724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.592041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.592088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.592413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.592458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.592668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.592711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.593030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.593072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.593294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.593337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.593571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.593614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.593913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.593958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.594173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.594225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.594540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.594584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.594882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.594928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.595227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.595271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.595573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.595615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.595938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.595980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.596185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.596241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.596569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.596613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.596883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.596925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.597171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.597223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.597386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.597428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.597718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.597761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.597977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.598026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4110361
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.598250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.598295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4110361
00:39:27.132 15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:39:27.132 [2024-11-06 15:43:54.598577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.598629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 4110361 ']'
00:39:27.132 [2024-11-06 15:43:54.598947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.598992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:27.132 [2024-11-06 15:43:54.599285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.599332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:39:27.132 [2024-11-06 15:43:54.599650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.599696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:27.132 [2024-11-06 15:43:54.599966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:39:27.132 [2024-11-06 15:43:54.600014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.600233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.600279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
15:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.132 [2024-11-06 15:43:54.600433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.132 [2024-11-06 15:43:54.600476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.132 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.600699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.600742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.601021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.601064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.601362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.601406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.601639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.601684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.602012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.602057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.602334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.602377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.602549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.602594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.602860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.602903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.603132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.603174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.603431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.603477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.603697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.603742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.603961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.604005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.604291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.604337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.604502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.604554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.604848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.604891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.605192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.605250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.605545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.605610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.605927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.605974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.606269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.606315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.606557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.606603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.606842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.606886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.607162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.607214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.607439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.607483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.607704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.607746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.608082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.608125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.608446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.608491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.608716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.608760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.608989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.609032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.609230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.609276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.609486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.609533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.609691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.609749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.610001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.610049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.610256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.610303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.610581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.610623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.610789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.610832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.133 qpair failed and we were unable to recover it.
00:39:27.133 [2024-11-06 15:43:54.611115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.133 [2024-11-06 15:43:54.611162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.134 qpair failed and we were unable to recover it.
00:39:27.134 [2024-11-06 15:43:54.611464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.134 [2024-11-06 15:43:54.611512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.134 qpair failed and we were unable to recover it.
00:39:27.134 [2024-11-06 15:43:54.611745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.134 [2024-11-06 15:43:54.611786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.134 qpair failed and we were unable to recover it.
00:39:27.134 [2024-11-06 15:43:54.611999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.134 [2024-11-06 15:43:54.612042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.134 qpair failed and we were unable to recover it.
00:39:27.134 [2024-11-06 15:43:54.612261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.134 [2024-11-06 15:43:54.612308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.134 qpair failed and we were unable to recover it.
00:39:27.134 [2024-11-06 15:43:54.612540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.134 [2024-11-06 15:43:54.612584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.134 qpair failed and we were unable to recover it.
00:39:27.134 [2024-11-06 15:43:54.612875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.134 [2024-11-06 15:43:54.612931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.134 qpair failed and we were unable to recover it.
00:39:27.134 [2024-11-06 15:43:54.613250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.134 [2024-11-06 15:43:54.613307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.134 qpair failed and we were unable to recover it.
00:39:27.134 [2024-11-06 15:43:54.613546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.134 [2024-11-06 15:43:54.613588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.134 qpair failed and we were unable to recover it.
00:39:27.134 [2024-11-06 15:43:54.613744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.134 [2024-11-06 15:43:54.613785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.134 qpair failed and we were unable to recover it.
00:39:27.134 [2024-11-06 15:43:54.613999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.134 [2024-11-06 15:43:54.614041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.134 qpair failed and we were unable to recover it.
00:39:27.134 [2024-11-06 15:43:54.614284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.614333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.614568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.614616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.614860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.614907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.615066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.615109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.615314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.615358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 
00:39:27.134 [2024-11-06 15:43:54.615494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.615538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.615815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.615856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.616128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.616190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.616510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.616555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.616777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.616823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 
00:39:27.134 [2024-11-06 15:43:54.617110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.617154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.617413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.617464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.617701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.617746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.618064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.618110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.618356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.618403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 
00:39:27.134 [2024-11-06 15:43:54.618680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.618727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.618966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.619011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.619250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.619299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.619520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.619567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.619732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.619776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 
00:39:27.134 [2024-11-06 15:43:54.620002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.620046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.620323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.620372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.620696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.620746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.620973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.621018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.621230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.621274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 
00:39:27.134 [2024-11-06 15:43:54.621500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.134 [2024-11-06 15:43:54.621544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.134 qpair failed and we were unable to recover it. 00:39:27.134 [2024-11-06 15:43:54.621766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.621813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.622024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.622068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.622290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.622338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.622500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.622546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 
00:39:27.135 [2024-11-06 15:43:54.622772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.622816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.623045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.623090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.623319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.623364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.623530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.623575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.623828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.623887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 
00:39:27.135 [2024-11-06 15:43:54.624122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.624168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.624461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.624506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.624654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.624697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.624916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.624960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.625184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.625240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 
00:39:27.135 [2024-11-06 15:43:54.625531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.625575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.625712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.625757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.625905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.625949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.626188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.626254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.626484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.626531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 
00:39:27.135 [2024-11-06 15:43:54.626674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.626719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.626945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.626989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.627156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.627220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.627362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.627405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.627684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.627729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 
00:39:27.135 [2024-11-06 15:43:54.627891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.627935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.628178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.628235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.628471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.628516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.628675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.628718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.628873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.628921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 
00:39:27.135 [2024-11-06 15:43:54.629078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.629122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.629264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.629309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.629458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.135 [2024-11-06 15:43:54.629504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.135 qpair failed and we were unable to recover it. 00:39:27.135 [2024-11-06 15:43:54.629723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.629770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.629937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.629983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 
00:39:27.136 [2024-11-06 15:43:54.630138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.630182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.630648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.630694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.630838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.630882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.631233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.631279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.631502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.631547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 
00:39:27.136 [2024-11-06 15:43:54.631851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.631896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.632224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.632269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.632469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.632515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.632749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.632792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.633108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.633151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 
00:39:27.136 [2024-11-06 15:43:54.633394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.633457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.633592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.633636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.633886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.633930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.634067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.634116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.634366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.634415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 
00:39:27.136 [2024-11-06 15:43:54.634637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.634683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.634893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.634953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.635236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.635287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.635472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.635518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.635842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.635888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 
00:39:27.136 [2024-11-06 15:43:54.636182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.636239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.636518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.636571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.636894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.636939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.637151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.637197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 00:39:27.136 [2024-11-06 15:43:54.637435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.637481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 
00:39:27.136 [2024-11-06 15:43:54.637627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.136 [2024-11-06 15:43:54.637677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.136 qpair failed and we were unable to recover it. 
00:39:27.139 [message triplet above repeated from 2024-11-06 15:43:54.637904 through 15:43:54.668673: every reconnect attempt for tqpair=0x615000350000 to addr=10.0.0.2, port=4420 failed with connect() errno = 111, and the qpair could not be recovered] 
00:39:27.139 [2024-11-06 15:43:54.668885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.139 [2024-11-06 15:43:54.668929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.139 qpair failed and we were unable to recover it. 00:39:27.139 [2024-11-06 15:43:54.669230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.139 [2024-11-06 15:43:54.669277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.139 qpair failed and we were unable to recover it. 00:39:27.139 [2024-11-06 15:43:54.669501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.139 [2024-11-06 15:43:54.669544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.139 qpair failed and we were unable to recover it. 00:39:27.139 [2024-11-06 15:43:54.669776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.139 [2024-11-06 15:43:54.669819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.670033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.670079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 
00:39:27.140 [2024-11-06 15:43:54.670277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.670326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.670549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.670594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.670739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.670784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.670935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.670979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.671100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.671144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 
00:39:27.140 [2024-11-06 15:43:54.671305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.671351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.671582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.671626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.671766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.671809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.672073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.672116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.672278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.672323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 
00:39:27.140 [2024-11-06 15:43:54.672459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.672504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.672732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.672778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.672946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.672990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.673312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.673356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.673560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.673604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 
00:39:27.140 [2024-11-06 15:43:54.673885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.673929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.674159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.674215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.674363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.674409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.674697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.674741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.674887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.674951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 
00:39:27.140 [2024-11-06 15:43:54.675200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.675258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.675414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.675471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.675692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.675736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.675963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.676007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.676148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.676193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 
00:39:27.140 [2024-11-06 15:43:54.676400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.676443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.676659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.676704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.676914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.676965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.677133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.677177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.677427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.677480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 
00:39:27.140 [2024-11-06 15:43:54.677681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.677725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.140 [2024-11-06 15:43:54.677992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.140 [2024-11-06 15:43:54.678037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.140 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.678253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.678302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.678513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.678556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.678760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.678803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 
00:39:27.141 [2024-11-06 15:43:54.678950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.678994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.679263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.679310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.679573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.679625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.679886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.679933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.680144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.680188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 
00:39:27.141 [2024-11-06 15:43:54.680340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.680383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.680657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.680703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.680871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.680914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.681195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.681253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.681385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.681430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 
00:39:27.141 [2024-11-06 15:43:54.681644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.681691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.681834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.681878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.682112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.682163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.682451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.682500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.682795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.682841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 
00:39:27.141 [2024-11-06 15:43:54.682991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.683034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.683243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.683287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.683601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.683659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.683822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.683866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.684073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.684129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 
00:39:27.141 [2024-11-06 15:43:54.684357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.684405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.684622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.684666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.684875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.684921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.685130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.685178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.685433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.685476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 
00:39:27.141 [2024-11-06 15:43:54.685629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.685673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.685824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.685869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.686010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.686056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.686194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.686262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.686405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.686450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 
00:39:27.141 [2024-11-06 15:43:54.686578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.686623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.141 [2024-11-06 15:43:54.686758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.141 [2024-11-06 15:43:54.686814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.141 qpair failed and we were unable to recover it. 00:39:27.142 [2024-11-06 15:43:54.687016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.687060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 00:39:27.142 [2024-11-06 15:43:54.687281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.687329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 00:39:27.142 [2024-11-06 15:43:54.687478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.687530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 
00:39:27.142 [2024-11-06 15:43:54.687860] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization...
00:39:27.142 [2024-11-06 15:43:54.687957] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:39:27.142 [2024-11-06 15:43:54.693009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.693056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 00:39:27.142 [2024-11-06 15:43:54.693237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.693288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 00:39:27.142 [2024-11-06 15:43:54.693429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.693486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 00:39:27.142 [2024-11-06 15:43:54.693709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.693755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 00:39:27.142 [2024-11-06 15:43:54.693965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.694010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 
00:39:27.142 [2024-11-06 15:43:54.694304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.694357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 00:39:27.142 [2024-11-06 15:43:54.694640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.694688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 00:39:27.142 [2024-11-06 15:43:54.694976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.695020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 00:39:27.142 [2024-11-06 15:43:54.695305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.695351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 00:39:27.142 [2024-11-06 15:43:54.695564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.142 [2024-11-06 15:43:54.695606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.142 qpair failed and we were unable to recover it. 
00:39:27.143 [2024-11-06 15:43:54.695740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.695793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.696007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.696051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.696197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.696256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.696469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.696512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.696704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.696747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 
00:39:27.143 [2024-11-06 15:43:54.696939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.696982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.697281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.697326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.697592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.697636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.697828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.697870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.698032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.698077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 
00:39:27.143 [2024-11-06 15:43:54.698286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.698332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.698479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.698524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.698755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.698813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.698973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.699018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.699223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.699271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 
00:39:27.143 [2024-11-06 15:43:54.699416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.699460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.699654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.699698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.699924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.699968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.700124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.700181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.700464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.700508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 
00:39:27.143 [2024-11-06 15:43:54.700712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.700764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.700961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.701006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.701212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.701265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.701469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.701516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.701649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.701694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 
00:39:27.143 [2024-11-06 15:43:54.701896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.701939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.702147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.702232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.702489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.702534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.702699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.702743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 00:39:27.143 [2024-11-06 15:43:54.702999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.143 [2024-11-06 15:43:54.703055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.143 qpair failed and we were unable to recover it. 
00:39:27.143 [2024-11-06 15:43:54.703224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.703272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.703481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.703525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.703673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.703718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.703853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.703897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.704171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.704228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 
00:39:27.144 [2024-11-06 15:43:54.704441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.704493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.704711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.704760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.704963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.705006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.705160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.705217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.705438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.705481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 
00:39:27.144 [2024-11-06 15:43:54.705771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.705814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.706102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.706149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.706458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.706505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.706703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.706747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.706971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.707015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 
00:39:27.144 [2024-11-06 15:43:54.707138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.707181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.707411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.707464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.707669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.707717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.707855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.707899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.708134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.708191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 
00:39:27.144 [2024-11-06 15:43:54.708433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.708490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.708666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.708711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.708991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.709052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.709279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.709325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.709456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.709500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 
00:39:27.144 [2024-11-06 15:43:54.709693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.709736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.710019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.710061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.710249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.710298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.710563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.144 [2024-11-06 15:43:54.710611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.144 qpair failed and we were unable to recover it. 00:39:27.144 [2024-11-06 15:43:54.710810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.145 [2024-11-06 15:43:54.710854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.145 qpair failed and we were unable to recover it. 
00:39:27.145 [2024-11-06 15:43:54.711065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.145 [2024-11-06 15:43:54.711107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.145 qpair failed and we were unable to recover it. 00:39:27.145 [2024-11-06 15:43:54.711304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.145 [2024-11-06 15:43:54.711350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.145 qpair failed and we were unable to recover it. 00:39:27.145 [2024-11-06 15:43:54.711575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.145 [2024-11-06 15:43:54.711625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.145 qpair failed and we were unable to recover it. 00:39:27.145 [2024-11-06 15:43:54.711843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.145 [2024-11-06 15:43:54.711886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.145 qpair failed and we were unable to recover it. 00:39:27.145 [2024-11-06 15:43:54.712019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.145 [2024-11-06 15:43:54.712062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.145 qpair failed and we were unable to recover it. 
00:39:27.145 [2024-11-06 15:43:54.712325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.712370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.712584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.712627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.712889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.712934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.713138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.713181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.713396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.713449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.713671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.713717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.713944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.713988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.714142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.714186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.714464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.714509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.714712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.714756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.714908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.714952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.715174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.715249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.715466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.715509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.715668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.715711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.715872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.715914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.716123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.145 [2024-11-06 15:43:54.716169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.145 qpair failed and we were unable to recover it.
00:39:27.145 [2024-11-06 15:43:54.716390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.716435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.716656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.716701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.716843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.716889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.717187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.717243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.717463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.717519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.717711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.717753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.717891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.717934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.718143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.718186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.718481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.718528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.718756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.718799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.718935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.718980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.719111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.719154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.719437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.719483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.719690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.719737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.719974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.720031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.720230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.720276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.720404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.720460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.720679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.720721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.720878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.720922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.721220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.721266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.721468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.721522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.721665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.721723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.721934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.721976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.722132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.722180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.722390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.722433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.722705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.722750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.722952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.723000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.146 qpair failed and we were unable to recover it.
00:39:27.146 [2024-11-06 15:43:54.723219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.146 [2024-11-06 15:43:54.723264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.723489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.723536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.723727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.723772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.724033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.724083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.724217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.724262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.724502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.724550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.724686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.724743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.724968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.725014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.725251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.725297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.725559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.725608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.725914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.725959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.726161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.726217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.726425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.726470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.726609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.726653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.726850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.726902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.727110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.727159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.727484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.727530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.727680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.727726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.727916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.727959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.728169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.728226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.728429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.728479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.728650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.728696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.728912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.728958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.729103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.729148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.729372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.729419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.729679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.729724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.729955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.730003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.730161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.730218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.730351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.730398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.730555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.147 [2024-11-06 15:43:54.730599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.147 qpair failed and we were unable to recover it.
00:39:27.147 [2024-11-06 15:43:54.730906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.730949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.731166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.731223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.731444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.731490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.731699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.731744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.731950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.732002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.732290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.732337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.732481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.732532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.732727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.732779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.732988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.733033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.733230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.733279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.733482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.733527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.733858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.733905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.734048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.734103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.734309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.734357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.734482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.734532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.734737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.734783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.735065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.735110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.735291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.735338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.735592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.735638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.735865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.735909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.736107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.736151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.736344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.736390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.736600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.736644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.736770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.736813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.736959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.737011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.737293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.737342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.737580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.737639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.737875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.148 [2024-11-06 15:43:54.737920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.148 qpair failed and we were unable to recover it.
00:39:27.148 [2024-11-06 15:43:54.738177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.738233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.738430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.738484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.738694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.738742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.738970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.739015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.739166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.739227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.739526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.739572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.739838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.739886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.740029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.740075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.740269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.740335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.740474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.740517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.740724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.740770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.741100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.741151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.741366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.741429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.741635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.426 [2024-11-06 15:43:54.741679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.426 qpair failed and we were unable to recover it.
00:39:27.426 [2024-11-06 15:43:54.741818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.741864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.742061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.742108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.742343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.742389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.742687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.742738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.742895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.742942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 
00:39:27.426 [2024-11-06 15:43:54.743219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.743268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.743469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.743513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.743721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.743764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.743966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.744011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.744181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.744237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 
00:39:27.426 [2024-11-06 15:43:54.744399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.744444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.744596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.744642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.744853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.744896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.745117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.745161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.745361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.745407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 
00:39:27.426 [2024-11-06 15:43:54.745557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.745607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.745890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.745938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.746142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.746188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.746411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.746457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.746649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.746693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 
00:39:27.426 [2024-11-06 15:43:54.747003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.747050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.747251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.747301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.747505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.747550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.747741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.747787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.748022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.748067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 
00:39:27.426 [2024-11-06 15:43:54.748302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.748348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.748577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.748623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.748840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.748887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.749029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.749073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.749218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.749276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 
00:39:27.426 [2024-11-06 15:43:54.749496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.749541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.749745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.749789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.749980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.750025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.750281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.750327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.750534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.750577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 
00:39:27.426 [2024-11-06 15:43:54.750712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.750757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.751026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.426 [2024-11-06 15:43:54.751071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.426 qpair failed and we were unable to recover it. 00:39:27.426 [2024-11-06 15:43:54.751244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.751290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.751496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.751541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.751803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.751850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 
00:39:27.427 [2024-11-06 15:43:54.752062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.752106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.752302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.752347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.752554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.752605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.752876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.752920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.753145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.753189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 
00:39:27.427 [2024-11-06 15:43:54.753480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.753526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.753670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.753714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.753974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.754018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.754151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.754195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.754336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.754381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 
00:39:27.427 [2024-11-06 15:43:54.754604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.754648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.754923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.754967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.755160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.755229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.755437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.755482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.755789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.755833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 
00:39:27.427 [2024-11-06 15:43:54.756037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.756081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.756235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.756283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.756491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.756536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.756728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.756773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.756980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.757023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 
00:39:27.427 [2024-11-06 15:43:54.757190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.757246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.757539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.757589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.757734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.757790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.757956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.757999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.758254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.758302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 
00:39:27.427 [2024-11-06 15:43:54.758521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.758566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.758798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.758841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.758996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.759041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.759244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.759291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.759437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.759491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 
00:39:27.427 [2024-11-06 15:43:54.759702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.759747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.759868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.759912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.760105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.760149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.760375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.760421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.760579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.760622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 
00:39:27.427 [2024-11-06 15:43:54.760836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.760883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.761040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.761085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.761295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.761344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.761605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.761650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.761907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.761956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 
00:39:27.427 [2024-11-06 15:43:54.762087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.762140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.762297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.762342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.762561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.762607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.762822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.762865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.763008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.763051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 
00:39:27.427 [2024-11-06 15:43:54.763246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.763293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.763561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.763609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.763830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.763874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.764096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.764143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.764420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.764467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 
00:39:27.427 [2024-11-06 15:43:54.764674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.764717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.764862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.427 [2024-11-06 15:43:54.764908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.427 qpair failed and we were unable to recover it. 00:39:27.427 [2024-11-06 15:43:54.765036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.765081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.765218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.765266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.765465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.765509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 
00:39:27.428 [2024-11-06 15:43:54.765712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.765755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.765956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.766000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.766275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.766321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.766543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.766590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.766791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.766835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 
00:39:27.428 [2024-11-06 15:43:54.767045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.767091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.767294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.767340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.767537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.767583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.767794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.767841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.768136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.768190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 
00:39:27.428 [2024-11-06 15:43:54.768398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.768442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.768569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.768612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.768768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.768812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.768932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.768977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.769242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.769298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 
00:39:27.428 [2024-11-06 15:43:54.769446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.769490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.769704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.769760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.770043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.770089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.770306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.770359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.770585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.770633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 
00:39:27.428 [2024-11-06 15:43:54.770848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.770897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.771095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.771154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.771297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.771340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.771475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.771519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.771703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.771746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 
00:39:27.428 [2024-11-06 15:43:54.771956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.772009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.772162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.772218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.772439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.772486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.772620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.772664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.772798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.772842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 
00:39:27.428 [2024-11-06 15:43:54.773044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.773091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.773296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.773354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.773639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.773684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.773885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.773929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.774152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.774194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 
00:39:27.428 [2024-11-06 15:43:54.774360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.774404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.774680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.774730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.774932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.774978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.775130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.775173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.775418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.775463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 
00:39:27.428 [2024-11-06 15:43:54.775619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.775663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.775976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.776020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.776223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.776275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.776427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.776474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.776684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.776729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 
00:39:27.428 [2024-11-06 15:43:54.776877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.776922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.777064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.777108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.777366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.777413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.777574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.777619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.777874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.777922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 
00:39:27.428 [2024-11-06 15:43:54.778142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.778188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.428 [2024-11-06 15:43:54.778362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.428 [2024-11-06 15:43:54.778408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.428 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.778635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.778679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.778807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.778850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.779007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.779074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 
00:39:27.429 [2024-11-06 15:43:54.779299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.779345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.779568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.779615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.779750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.779796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.780076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.780119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.780239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.780286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 
00:39:27.429 [2024-11-06 15:43:54.780532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.780581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.780798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.780843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.780964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.781008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.781229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.781276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.781467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.781521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 
00:39:27.429 [2024-11-06 15:43:54.781733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.781788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.782078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.782122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.782399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.782444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.782686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.782731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.782926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.782983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 
00:39:27.429 [2024-11-06 15:43:54.783217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.783265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.783467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.783511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.783736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.783783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.783946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.783991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.784220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.784281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 
00:39:27.429 [2024-11-06 15:43:54.784581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.784635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.784844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.784888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.785090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.785137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.785362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.785406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.785538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.785591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 
00:39:27.429 [2024-11-06 15:43:54.785751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.785801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.786016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.786060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.786292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.786343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.786469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.786514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.786702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.786745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 
00:39:27.429 [2024-11-06 15:43:54.786959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.787009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.787234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.787280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.787492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.787536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.787753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.787796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.788011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.788056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 
00:39:27.429 [2024-11-06 15:43:54.788262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.788306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.788524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.788567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.788707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.788750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.788977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.789020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.789258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.789310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 
00:39:27.429 [2024-11-06 15:43:54.789547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.789601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.789761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.789805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.790022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.790066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.790216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.790260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.429 qpair failed and we were unable to recover it. 00:39:27.429 [2024-11-06 15:43:54.790503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.429 [2024-11-06 15:43:54.790547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 
00:39:27.430 [2024-11-06 15:43:54.790811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.790856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.790990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.791033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.791288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.791338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.791551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.791595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.791859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.791903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 
00:39:27.430 [2024-11-06 15:43:54.792147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.792192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.792474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.792519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.792749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.792793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.793042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.793087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.793288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.793334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 
00:39:27.430 [2024-11-06 15:43:54.793623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.793667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.793822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.793866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.794112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.794155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.794490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.794538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.794680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.794724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 
00:39:27.430 [2024-11-06 15:43:54.794940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.794984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.795190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.795249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.795384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.795428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.795631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.795676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.795887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.795930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 
00:39:27.430 [2024-11-06 15:43:54.796119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.796163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.796330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.796375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.796503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.796546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.796855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.796900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.797059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.797102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 
00:39:27.430 [2024-11-06 15:43:54.797312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.797357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.797619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.797663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.797921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.797965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.798218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.798264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.798469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.798513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 
00:39:27.430 [2024-11-06 15:43:54.798716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.798759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.798911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.798955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.799221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.799265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.799524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.799568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.799789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.799838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 
00:39:27.430 [2024-11-06 15:43:54.799974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.800018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.800161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.800214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.800440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.800489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.800794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.800840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.801052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.801097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 
00:39:27.430 [2024-11-06 15:43:54.801354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.801400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.801601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.801645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.801819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.801863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.802082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.802125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.802353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.802398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 
00:39:27.430 [2024-11-06 15:43:54.802536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.802580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.802728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.802772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.802909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.802953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.803226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.803273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.803470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.803514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 
00:39:27.430 [2024-11-06 15:43:54.803792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.803835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.804048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.804091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.804239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.804286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.804429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.804473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 00:39:27.430 [2024-11-06 15:43:54.804727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.430 [2024-11-06 15:43:54.804770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.430 qpair failed and we were unable to recover it. 
00:39:27.431 [2024-11-06 15:43:54.804973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.805018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.805238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.805286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.805557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.805608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.805756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.805801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.805944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.806037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 
00:39:27.431 [2024-11-06 15:43:54.806305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.806350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.806639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.806681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.806819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.806862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.807055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.807099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.807314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.807361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 
00:39:27.431 [2024-11-06 15:43:54.807617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.807659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.807884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.807925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.808061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.808102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.808404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.808448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.808583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.808624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 
00:39:27.431 [2024-11-06 15:43:54.808813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.808856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.809047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.809089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.809368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.809410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.809560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.809602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.809807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.809855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 
00:39:27.431 [2024-11-06 15:43:54.810093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.810144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.810299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.810345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.810551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.810596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.810874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.810917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 00:39:27.431 [2024-11-06 15:43:54.811119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.431 [2024-11-06 15:43:54.811163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.431 qpair failed and we were unable to recover it. 
00:39:27.431 [2024-11-06 15:43:54.811450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.811541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.811911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.811997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.812171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.812234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.812435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.812480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.812765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.812808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.813015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.813059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.813225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.813269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.813414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.813458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.813668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.813712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.813880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.813921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.814048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.814090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.814370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.814417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.814702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.814745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.815041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.815083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.815355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.815400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.815610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.815653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.815842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.815884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.816147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.816190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.816403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.816446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.816702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.816744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.817020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.817063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.817285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.817341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.817648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.817693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.817996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.818039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.818326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.818370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.818627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.818670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.818954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.818997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.819280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.819325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.819530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.819572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.431 qpair failed and we were unable to recover it.
00:39:27.431 [2024-11-06 15:43:54.819845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.431 [2024-11-06 15:43:54.819889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.820179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.820236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.820468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.820511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.820740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.820782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.821082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.821125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.821419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.821473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.821743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.821789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.822029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.822084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.822365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.822410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.822707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.822749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.822988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.823029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.823229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.823273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.823473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.823516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.823722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.823764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.824076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.824118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.824401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.824446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.824707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.824748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.825021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.825064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.825351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.825396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.825709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.825750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.826028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.826072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.826301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.826345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.826637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.826678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.826884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.826926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.827182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.827234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.827504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.827547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.827808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.827850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.828160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.828212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.828432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.828474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.828700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.828743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.829025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.829067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.829269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.829313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.829552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.829599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.829828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.829872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.830150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.830192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.830511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.830556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.830863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.830904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.831136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.831177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.831484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.831527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.831817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.831860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.832161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.832223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.832482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.832525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.832782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.832825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.833107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.833151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.833371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.833415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.833703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.833753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.834030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.834072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.834355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.834400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.834594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.834637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.432 qpair failed and we were unable to recover it.
00:39:27.432 [2024-11-06 15:43:54.834779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.432 [2024-11-06 15:43:54.834821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.835018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.835060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.835341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.835386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.835675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.835718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.836021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.836063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.836357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.836401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.836680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.836723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.837002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.837044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.837328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.837371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.837655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.837698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.837778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:39:27.433 [2024-11-06 15:43:54.838031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.838075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.838285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.838329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.838595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.838639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.838904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.838946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.839223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.839267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.839473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.839516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.839785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.433 [2024-11-06 15:43:54.839829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.433 qpair failed and we were unable to recover it.
00:39:27.433 [2024-11-06 15:43:54.840118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.840160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.840424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.840468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.840750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.840793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.840994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.841036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.841315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.841359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 
00:39:27.433 [2024-11-06 15:43:54.841664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.841706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.842024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.842067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.842350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.842394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.842669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.842712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.842924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.842968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 
00:39:27.433 [2024-11-06 15:43:54.843159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.843200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.843418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.843462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.843770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.843813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.844071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.844114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.844393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.844437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 
00:39:27.433 [2024-11-06 15:43:54.844696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.844739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.844941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.844984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.845260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.845304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.845504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.845547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.845769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e180 is same with the state(6) to be set 00:39:27.433 [2024-11-06 15:43:54.846099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.846196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 
00:39:27.433 [2024-11-06 15:43:54.846518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.846568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.846855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.846899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.847111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.847155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.847372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.847416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.847694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.847739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 
00:39:27.433 [2024-11-06 15:43:54.847964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.848007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.848231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.848276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.848503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.848545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.848848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.848891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.849108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.849150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 
00:39:27.433 [2024-11-06 15:43:54.849449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.849495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.849703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.849745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.849965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.850008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.850317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.850362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.850571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.850614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 
00:39:27.433 [2024-11-06 15:43:54.850933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.850976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.851301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.851347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.851540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.433 [2024-11-06 15:43:54.851596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.433 qpair failed and we were unable to recover it. 00:39:27.433 [2024-11-06 15:43:54.851875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.851919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.852198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.852254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 
00:39:27.434 [2024-11-06 15:43:54.852557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.852600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.852860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.852903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.853100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.853144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.853429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.853474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.853759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.853801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 
00:39:27.434 [2024-11-06 15:43:54.854060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.854111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.854392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.854437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.854734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.854777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.855049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.855093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.855361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.855405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 
00:39:27.434 [2024-11-06 15:43:54.855673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.855716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.855941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.855985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.856181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.856233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.856518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.856562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.856881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.856922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 
00:39:27.434 [2024-11-06 15:43:54.857129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.857172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.857492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.857536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.857869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.857916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.858195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.858254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.858477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.858523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 
00:39:27.434 [2024-11-06 15:43:54.858759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.858801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.858947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.858991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.859268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.859314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.859577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.859619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.859873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.859918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 
00:39:27.434 [2024-11-06 15:43:54.860145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.860187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.860468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.860512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.860637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.860680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.860897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.860939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.861129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.861173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 
00:39:27.434 [2024-11-06 15:43:54.861411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.861456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.861660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.861703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.861987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.862031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.862258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.862304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.862449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.862492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 
00:39:27.434 [2024-11-06 15:43:54.862645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.862688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.862967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.863011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.863287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.863333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.863603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.863649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.863899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.863942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 
00:39:27.434 [2024-11-06 15:43:54.864172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.864225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.864422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.864465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.864603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.864645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.864922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.864964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.865240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.865285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 
00:39:27.434 [2024-11-06 15:43:54.865547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.865597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.865809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.865853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.866054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.866096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.866365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.866410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.866554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.866598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 
00:39:27.434 [2024-11-06 15:43:54.866784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.866826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.867032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.867077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.867284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.867329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.434 qpair failed and we were unable to recover it. 00:39:27.434 [2024-11-06 15:43:54.867589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.434 [2024-11-06 15:43:54.867633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.435 qpair failed and we were unable to recover it. 00:39:27.435 [2024-11-06 15:43:54.867861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.435 [2024-11-06 15:43:54.867905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.435 qpair failed and we were unable to recover it. 
00:39:27.435 [2024-11-06 15:43:54.868193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.868245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.868458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.868501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.868702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.868745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.868944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.868987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.869249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.869293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.869489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.869535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.869686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.869743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.870066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.870110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.870327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.870372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.870645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.870688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.870965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.871009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.871288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.871333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.871622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.871666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.871886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.871929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.872191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.872249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.872536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.872579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.872858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.872901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.873117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.873161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.873333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.873377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.873590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.873633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.873781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.873824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.874095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.874138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.874422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.874468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.874621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.874664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.874942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.874985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.875136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.875179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.875470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.875519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.875828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.875871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.876019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.876062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.876341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.876386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.876583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.876632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.876841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.876884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.877093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.877135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.877425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.877470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.877705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.877749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.878027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.878071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.878345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.878390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.878670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.878715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.878999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.879042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.879324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.879369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.879506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.879548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.879746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.879791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.880047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.880090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.880392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.880437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.880701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.880745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.881015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.881059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.881280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.881324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.881460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.881505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.435 qpair failed and we were unable to recover it.
00:39:27.435 [2024-11-06 15:43:54.881776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.435 [2024-11-06 15:43:54.881821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.882103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.882146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.882399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.882443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.882641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.882684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.882890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.882982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.883226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.883270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.883505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.883548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.883699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.883741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.884006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.884049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.884393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.884480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.884839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.884923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.885192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.885258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.885500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.885546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.885827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.885871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.886025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.886070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.886327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.886373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.886644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.886698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.886965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.887020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.887334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.887382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.887602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.887645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.887869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.887914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.888099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.888142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.888362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.888416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.888697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.888740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.888968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.889011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.889230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.889276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.889478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.889521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.889675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.889718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.890029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.890075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.890336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.890382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.890604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.890647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.890956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.890998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.891259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.891305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.891578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.891621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.891844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.891888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.892220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.892265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.892553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.892598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.892808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.892851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.893103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.893147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.893335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.893381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.893536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.893594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.893855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.893900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.894054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.894095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.894245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.894291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.894499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.894541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.894732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.894776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.894977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.436 [2024-11-06 15:43:54.895020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.436 qpair failed and we were unable to recover it.
00:39:27.436 [2024-11-06 15:43:54.895278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.436 [2024-11-06 15:43:54.895322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.436 qpair failed and we were unable to recover it. 00:39:27.436 [2024-11-06 15:43:54.895619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.436 [2024-11-06 15:43:54.895662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.436 qpair failed and we were unable to recover it. 00:39:27.436 [2024-11-06 15:43:54.895887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.436 [2024-11-06 15:43:54.895934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.436 qpair failed and we were unable to recover it. 00:39:27.436 [2024-11-06 15:43:54.896248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.436 [2024-11-06 15:43:54.896307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.436 qpair failed and we were unable to recover it. 00:39:27.436 [2024-11-06 15:43:54.896535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.436 [2024-11-06 15:43:54.896582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.436 qpair failed and we were unable to recover it. 
00:39:27.436 [2024-11-06 15:43:54.896873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.436 [2024-11-06 15:43:54.896915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.436 qpair failed and we were unable to recover it. 00:39:27.436 [2024-11-06 15:43:54.897172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.436 [2024-11-06 15:43:54.897225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.436 qpair failed and we were unable to recover it. 00:39:27.436 [2024-11-06 15:43:54.897497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.436 [2024-11-06 15:43:54.897541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.436 qpair failed and we were unable to recover it. 00:39:27.436 [2024-11-06 15:43:54.897800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.436 [2024-11-06 15:43:54.897844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.436 qpair failed and we were unable to recover it. 00:39:27.436 [2024-11-06 15:43:54.898124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.898168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 
00:39:27.437 [2024-11-06 15:43:54.898494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.898543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.898715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.898761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.899056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.899099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.899264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.899309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.899511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.899554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 
00:39:27.437 [2024-11-06 15:43:54.899835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.899883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.900173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.900238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.900458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.900501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.900652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.900695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.900987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.901029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 
00:39:27.437 [2024-11-06 15:43:54.901352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.901398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.901609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.901653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.901940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.901984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.902242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.902287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.902521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.902564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 
00:39:27.437 [2024-11-06 15:43:54.902865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.902910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.903190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.903241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.903517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.903563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.903786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.903830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.904034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.904079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 
00:39:27.437 [2024-11-06 15:43:54.904362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.904408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.904609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.904652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.904879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.904922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.905225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.905269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.905468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.905513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 
00:39:27.437 [2024-11-06 15:43:54.905764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.905807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.906063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.906106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.906373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.906419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.906638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.906681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.906965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.907008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 
00:39:27.437 [2024-11-06 15:43:54.907146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.907189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.907410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.907453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.907743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.907791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.908092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.908139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.908313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.908363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 
00:39:27.437 [2024-11-06 15:43:54.908559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.908604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.908813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.908855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.909181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.909237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.909443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.909487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.909696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.909740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 
00:39:27.437 [2024-11-06 15:43:54.909895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.909939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.910137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.910181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.910448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.910491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.910628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.910671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.910877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.910921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 
00:39:27.437 [2024-11-06 15:43:54.911110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.911163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.911380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.911427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.911586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.911632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.911918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.911964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.912161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.912214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 
00:39:27.437 [2024-11-06 15:43:54.912496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.912539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.912848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.912891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.913178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.913229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.437 [2024-11-06 15:43:54.913461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.437 [2024-11-06 15:43:54.913503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.437 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.913750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.913792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 
00:39:27.438 [2024-11-06 15:43:54.914047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.914090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.914239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.914285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.914560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.914604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.914803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.914846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.915132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.915174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 
00:39:27.438 [2024-11-06 15:43:54.915417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.915461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.915757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.915799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.916003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.916046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.916328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.916372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.916584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.916627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 
00:39:27.438 [2024-11-06 15:43:54.916898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.916940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.917196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.917249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.917522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.917564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.917828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.917872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.918168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.918224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 
00:39:27.438 [2024-11-06 15:43:54.918449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.918492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.918769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.918812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.919036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.919084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.919352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.919400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.919609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.919654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 
00:39:27.438 [2024-11-06 15:43:54.919873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.919919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.920137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.920179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.920474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.920515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.920741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.920784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.921050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.921093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 
00:39:27.438 [2024-11-06 15:43:54.921295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.921340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.921597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.921640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.921918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.921961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.922168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.922220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 00:39:27.438 [2024-11-06 15:43:54.922444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.438 [2024-11-06 15:43:54.922489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.438 qpair failed and we were unable to recover it. 
00:39:27.438 [2024-11-06 15:43:54.922690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.922740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.922950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.922993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.923223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.923267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.923476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.923519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.923727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.923769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.924027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.924070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.924345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.924389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.924547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.924589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.924912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.924955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.925174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.925230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.925441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.925484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.925686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.925728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.925984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.926026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.926229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.926273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.926481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.926525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.926793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.926836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.927073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.927116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.927445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.438 [2024-11-06 15:43:54.927491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.438 qpair failed and we were unable to recover it.
00:39:27.438 [2024-11-06 15:43:54.927630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.927671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.927946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.927989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.928245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.928290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.928569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.928611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.928757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.928799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.929011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.929054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.929360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.929404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.929635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.929680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.929975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.930018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.930357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.930405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.930623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.930666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.930920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.930963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.931225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.931268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.931490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.931532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.931784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.931828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.932054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.932096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.932296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.932340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.932548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.932591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.932800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.932843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.933074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.933116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.933342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.933385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.933595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.933637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.933891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.933946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.934227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.934272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.934491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.934533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.934840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.934884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.935160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.935211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.935460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.935503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.935705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.935747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.935953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.935996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.936310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.936355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.936549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.936605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.936915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.936958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.937175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.937227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.937509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.937552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.937777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.937818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.938065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.938109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.938307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.938352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.938567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.938610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.938816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.938858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.939161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.939211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.939417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.939460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.939686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.939729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.940008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.940050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.940249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.940293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.940460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.940502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.940701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.940744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.940978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.941020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.941176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.941229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.941530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.941583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.941741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.941785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.942058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.942101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.942352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.942398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.942628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.439 [2024-11-06 15:43:54.942672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.439 qpair failed and we were unable to recover it.
00:39:27.439 [2024-11-06 15:43:54.942985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.943028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.943252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.943298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.943562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.943605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.943813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.943856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.944166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.944221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.944435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.944478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.944687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.944731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.944955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.944999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.945255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.945299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.945538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.945580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.945789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.945832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.946114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.946157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.946407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.946452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.946671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.946715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.947003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.947045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.947317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.947362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.947650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.947694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.947932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.947975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.948192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.948245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.948501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.948544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.948825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.948870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.949128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.949184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.949467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.440 [2024-11-06 15:43:54.949511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.440 qpair failed and we were unable to recover it.
00:39:27.440 [2024-11-06 15:43:54.949792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.949835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.950093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.950135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.950393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.950437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.950742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.950786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.951025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.951068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 
00:39:27.440 [2024-11-06 15:43:54.951366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.951410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.951642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.951685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.951989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.952033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.952294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.952338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.952569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.952614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 
00:39:27.440 [2024-11-06 15:43:54.952873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.952916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.953218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.953263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.953541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.953592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.953716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:27.440 [2024-11-06 15:43:54.953749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:27.440 [2024-11-06 15:43:54.953761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:27.440 [2024-11-06 15:43:54.953772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:27.440 [2024-11-06 15:43:54.953780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:27.440 [2024-11-06 15:43:54.953893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.953935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.954213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.954259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.954424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.954466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.954629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.954671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.954897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.954941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 
00:39:27.440 [2024-11-06 15:43:54.955255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.955301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.955448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.955491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.955765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.955807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.956004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.956047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.956288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.956335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 
00:39:27.440 [2024-11-06 15:43:54.956343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:27.440 [2024-11-06 15:43:54.956503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.956552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.956711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.956754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.956894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.956936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.957107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:27.440 [2024-11-06 15:43:54.957125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:27.440 [2024-11-06 15:43:54.957150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:39:27.440 [2024-11-06 15:43:54.957193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.957260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 
00:39:27.440 [2024-11-06 15:43:54.957470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.957512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.957760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.957802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.440 [2024-11-06 15:43:54.958103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.440 [2024-11-06 15:43:54.958147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.440 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.958365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.958410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.958554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.958598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 
00:39:27.441 [2024-11-06 15:43:54.958800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.958844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.959125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.959169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.959504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.959563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.959866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.959919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.960225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.960271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 
00:39:27.441 [2024-11-06 15:43:54.960551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.960594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.960755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.960798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.961071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.961114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.961328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.961373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.961534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.961578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 
00:39:27.441 [2024-11-06 15:43:54.961881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.961924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.962233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.962276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.962502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.962546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.962896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.962938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.963231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.963274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 
00:39:27.441 [2024-11-06 15:43:54.963534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.963577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.963891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.963934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.964200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.964253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.964491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.964534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.964739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.964782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 
00:39:27.441 [2024-11-06 15:43:54.965043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.965085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.965294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.965339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.965575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.965618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.965886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.965928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.966062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.966106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 
00:39:27.441 [2024-11-06 15:43:54.966372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.966418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.966570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.966614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.966840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.966884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.967161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.967214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.967469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.967513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 
00:39:27.441 [2024-11-06 15:43:54.967710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.967768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.968037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.968081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.968230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.968275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.968534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.968578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.968723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.968766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 
00:39:27.441 [2024-11-06 15:43:54.969069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.969113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.969320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.969366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.969509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.969551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.969807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.969853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.970136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.970180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 
00:39:27.441 [2024-11-06 15:43:54.970358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.970401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.970692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.970736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.971014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.971057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.971258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.971301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.971542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.971586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 
00:39:27.441 [2024-11-06 15:43:54.971915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.971958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.972169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.972222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.972430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.441 [2024-11-06 15:43:54.972473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.441 qpair failed and we were unable to recover it. 00:39:27.441 [2024-11-06 15:43:54.972756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.972805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.973047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.973103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 
00:39:27.442 [2024-11-06 15:43:54.973324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.973386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.973641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.973685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.973900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.973944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.974227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.974271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.974482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.974524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 
00:39:27.442 [2024-11-06 15:43:54.974781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.974824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.975102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.975144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.975427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.975488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.975706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.975771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.976105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.976156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 
00:39:27.442 [2024-11-06 15:43:54.976370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.976414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.976574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.976617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.976829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.976873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.977079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.977122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.977417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.977463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 
00:39:27.442 [2024-11-06 15:43:54.977749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.977793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.978014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.978056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.978339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.978384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.978536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.978579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.978740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.978783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 
00:39:27.442 [2024-11-06 15:43:54.979085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.979135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.979356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.979401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.979699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.979742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.979970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.980013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.980232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.980276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 
00:39:27.442 [2024-11-06 15:43:54.980482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.980526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.980722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.980765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.980970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.981012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.981315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.981359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.981611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.981655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 
00:39:27.442 [2024-11-06 15:43:54.981885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.981928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.982186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.982239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.982384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.982427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.982615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.982658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.982984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.983027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 
00:39:27.442 [2024-11-06 15:43:54.983243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.983288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.983594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.983641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.983890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.983946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.984220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.984265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.984523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.984566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 
00:39:27.442 [2024-11-06 15:43:54.984760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.984803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.985005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.985049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.985189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.985245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.985394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.985438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.985584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.985627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 
00:39:27.442 [2024-11-06 15:43:54.985867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.985910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.986169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.986227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.986479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.986534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.986776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.986821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.987061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.987106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 
00:39:27.442 [2024-11-06 15:43:54.987359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.987404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.987706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.987750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.988032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.988075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.442 qpair failed and we were unable to recover it. 00:39:27.442 [2024-11-06 15:43:54.988301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.442 [2024-11-06 15:43:54.988346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.988606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.988649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 
00:39:27.443 [2024-11-06 15:43:54.988875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.988919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.989147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.989189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.989480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.989524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.989828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.989871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.990136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.990180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 
00:39:27.443 [2024-11-06 15:43:54.990492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.990545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.990703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.990746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.991027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.991070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.991272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.991317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.991530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.991573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 
00:39:27.443 [2024-11-06 15:43:54.991835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.991878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.992088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.992130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.992404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.992448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.992678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.992721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.992929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.992972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 
00:39:27.443 [2024-11-06 15:43:54.993239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.993283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.993576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.993619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.993826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.993870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.994125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.994168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.994451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.994494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 
00:39:27.443 [2024-11-06 15:43:54.994706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.994750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.995033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.995075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.995337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.995382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.995657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.995710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.995930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.995972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 
00:39:27.443 [2024-11-06 15:43:54.996257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.996301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.996589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.996633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.996884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.996927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.997219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.997263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.997489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.997532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 
00:39:27.443 [2024-11-06 15:43:54.997838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.997880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.998089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.998132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.998461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.998536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.998764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.998823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.999114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.999159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 
00:39:27.443 [2024-11-06 15:43:54.999502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.999547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:54.999748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:54.999791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:55.000037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:55.000080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:55.000224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:55.000270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 00:39:27.443 [2024-11-06 15:43:55.000480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.443 [2024-11-06 15:43:55.000522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.443 qpair failed and we were unable to recover it. 
00:39:27.443 [2024-11-06 15:43:55.000732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.000776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.001003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.001047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.001235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.001280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.001548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.001593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.001811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.001857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.002158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.002223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.002436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.002479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.002736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.002779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.003086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.003128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.003391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.003436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.003718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.003761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.004040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.004083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.004254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.004298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.443 qpair failed and we were unable to recover it.
00:39:27.443 [2024-11-06 15:43:55.004527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.443 [2024-11-06 15:43:55.004569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.004721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.004764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.004990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.005033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.005308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.005354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.005514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.005557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.005761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.005805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.006034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.006077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.006327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.006372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.006526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.006569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.006862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.006905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.007182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.007235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.007446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.007489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.007636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.007679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.007836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.007881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.008074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.008117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.008322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.008365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.008516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.008557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.008762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.008805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.009008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.009051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.009277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.009333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.009507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.009550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.009906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.009950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.010237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.010283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.010492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.010536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.010743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.010786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.011043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.011086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.011363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.011409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.011631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.011675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.011895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.011939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.012263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.012308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.012534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.012576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.012738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.012780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.012933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.012992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.013225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.013270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.013462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.013504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.013695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.013738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.014024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.014065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.014259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.014303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.014466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.014509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.014657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.014700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.015079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.015122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.015352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.015396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.015557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.015599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.015762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.015803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.016008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.016051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.016285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.016329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.016596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.016640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.016999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.017042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.017323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.017367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.017575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.017617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.017880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.017922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.018129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.018171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.018438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.018482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.018780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.018823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.444 [2024-11-06 15:43:55.019040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.444 [2024-11-06 15:43:55.019082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.444 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.019343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.019387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.019691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.019734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.020055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.020097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.020377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.020421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.020701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.020758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.021050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.021095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.021364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.021409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.021698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.021741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.022035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.022078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.022297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.022341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.022550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.022594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.022855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.022897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.023135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.023179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.023452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.023497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.023792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.023836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.024042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.024084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.024352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.024397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.024663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.024713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.024869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.024912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.025227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.025272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.025531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.025574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.025738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.025781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.025971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.026014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.026296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.026341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.026549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.026592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.026794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.026837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.027049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.027093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.027380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.027425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.027668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.027711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.028015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.028058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.028338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.028385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.028676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.028720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.029057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.029102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.029373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.029418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.029626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.029669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.029940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.029984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.030272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.030318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.030614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.030657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.030889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.030933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.031127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.031170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.031438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.031481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.031682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.031726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.031964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.032008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.032156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.032200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.032469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.032537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.032770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.032821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.033116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.445 [2024-11-06 15:43:55.033160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.445 qpair failed and we were unable to recover it.
00:39:27.445 [2024-11-06 15:43:55.033460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.445 [2024-11-06 15:43:55.033505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.445 qpair failed and we were unable to recover it. 00:39:27.445 [2024-11-06 15:43:55.033827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.445 [2024-11-06 15:43:55.033872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.445 qpair failed and we were unable to recover it. 00:39:27.445 [2024-11-06 15:43:55.034150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.445 [2024-11-06 15:43:55.034194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.445 qpair failed and we were unable to recover it. 00:39:27.445 [2024-11-06 15:43:55.034422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.445 [2024-11-06 15:43:55.034466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.445 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.034728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.034772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 
00:39:27.446 [2024-11-06 15:43:55.035053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.035097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.035384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.035429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.035595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.035639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.035927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.035971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.036248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.036293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 
00:39:27.446 [2024-11-06 15:43:55.036502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.036547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.036870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.036915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.037171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.037225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.037447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.037492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.037720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.037770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 
00:39:27.446 [2024-11-06 15:43:55.038031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.038089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.038369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.038415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.038587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.038632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.038914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.038958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.039166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.039222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 
00:39:27.446 [2024-11-06 15:43:55.039381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.039425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.039582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.039626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.039971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.040016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.040161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.040218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.040389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.040435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 
00:39:27.446 [2024-11-06 15:43:55.040641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.040684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.040961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.041006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.041240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.041287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.041500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.041544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.041684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.041727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 
00:39:27.446 [2024-11-06 15:43:55.041958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.042003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.042311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.042358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.042562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.042606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.042840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.042886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.446 [2024-11-06 15:43:55.043033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.043078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 
00:39:27.446 [2024-11-06 15:43:55.043309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.446 [2024-11-06 15:43:55.043356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.446 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.043504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.043549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.043681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.043733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.044026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.044072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.044267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.044313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 
00:39:27.722 [2024-11-06 15:43:55.044575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.044621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.044902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.044948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.045224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.045271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.045432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.045477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.045631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.045676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 
00:39:27.722 [2024-11-06 15:43:55.045897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.045942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.046133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.046178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.046357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.046404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.046559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.046603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.046817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.046860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 
00:39:27.722 [2024-11-06 15:43:55.047063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.047108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.047305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.047352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.047554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.722 [2024-11-06 15:43:55.047599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.722 qpair failed and we were unable to recover it. 00:39:27.722 [2024-11-06 15:43:55.047889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.047934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.048226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.048273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 
00:39:27.723 [2024-11-06 15:43:55.048505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.048551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.048777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.048822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.049077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.049122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.049364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.049412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.049670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.049715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 
00:39:27.723 [2024-11-06 15:43:55.050028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.050081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.050359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.050405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.050568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.050614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.050848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.050890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.051180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.051254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 
00:39:27.723 [2024-11-06 15:43:55.051398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.051441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.051593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.051637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.051841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.051883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.052087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.052130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.052354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.052399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 
00:39:27.723 [2024-11-06 15:43:55.052707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.052750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.053034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.053076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.053276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.053321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.053563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.053606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.053859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.053902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 
00:39:27.723 [2024-11-06 15:43:55.054157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.054199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.054363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.054406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.054565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.054614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.054858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.054903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 00:39:27.723 [2024-11-06 15:43:55.055181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.723 [2024-11-06 15:43:55.055250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.723 qpair failed and we were unable to recover it. 
00:39:27.726 [2024-11-06 15:43:55.083123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.083166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.083488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.083565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.083892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.083954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.084189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.084256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.084481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.084525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.084686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.084730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.084936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.084980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.085261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.085306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.085463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.085506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.085781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.085824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.086082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.086124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.086270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.086314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.086544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.086588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.086786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.086830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.087111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.087154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.087379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.087424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.087664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.087712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.087954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.088058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.726 qpair failed and we were unable to recover it.
00:39:27.726 [2024-11-06 15:43:55.088335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.726 [2024-11-06 15:43:55.088381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.088532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.088575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.088830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.088873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.089090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.089134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.089491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.089535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.089764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.089807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.090014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.090057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.090353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.090397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.090611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.090654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.090908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.090950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.091230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.091274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.091476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.091519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.091663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.091706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.092017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.092060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.092267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.092312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.092548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.092591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.092782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.092825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.093032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.093075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.093294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.093338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.093502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.093546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.093852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.093896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.094088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.094131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.094367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.094411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.094623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.094666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.094919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.094962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.095164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.095216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.095481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.095525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.095755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.095798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.096013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.096055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.096324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.096370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.096592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.727 [2024-11-06 15:43:55.096635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.727 qpair failed and we were unable to recover it.
00:39:27.727 [2024-11-06 15:43:55.096862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.096905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.097126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.097169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.097395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.097446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.097660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.097705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.097919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.097963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.098243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.098288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.098538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.098581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.098792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.098835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.099091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.099143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.099474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.099518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.099750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.099793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.100082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.100125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.100433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.100478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.100762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.100805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.101001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.101043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.101279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.101324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.101602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.101645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.101866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.101908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.102220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.102264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.102413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.102455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.102660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.102703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.102912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.102954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.103181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.103237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.103476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.103523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.103831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.103887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.104168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.104221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.104430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.104474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.104668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.104709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.104999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.105042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.105312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.105358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.105606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.105648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.105896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.105940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.106104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.106146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.106484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.106529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.106761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.106803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.107008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.107052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.728 [2024-11-06 15:43:55.107377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.728 [2024-11-06 15:43:55.107420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.728 qpair failed and we were unable to recover it.
00:39:27.729 [2024-11-06 15:43:55.107691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.729 [2024-11-06 15:43:55.107734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.729 qpair failed and we were unable to recover it.
00:39:27.729 [2024-11-06 15:43:55.108048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.729 [2024-11-06 15:43:55.108092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.729 qpair failed and we were unable to recover it.
00:39:27.729 [2024-11-06 15:43:55.108301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.729 [2024-11-06 15:43:55.108345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.729 qpair failed and we were unable to recover it.
00:39:27.729 [2024-11-06 15:43:55.108510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.729 [2024-11-06 15:43:55.108554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.729 qpair failed and we were unable to recover it.
00:39:27.729 [2024-11-06 15:43:55.108812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.729 [2024-11-06 15:43:55.108856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.729 qpair failed and we were unable to recover it.
00:39:27.729 [2024-11-06 15:43:55.109056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.729 [2024-11-06 15:43:55.109099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.729 qpair failed and we were unable to recover it.
00:39:27.729 [2024-11-06 15:43:55.109264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.729 [2024-11-06 15:43:55.109309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.729 qpair failed and we were unable to recover it.
00:39:27.729 [2024-11-06 15:43:55.109480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.729 [2024-11-06 15:43:55.109524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.729 qpair failed and we were unable to recover it.
00:39:27.729 [2024-11-06 15:43:55.109790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.729 [2024-11-06 15:43:55.109833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.729 qpair failed and we were unable to recover it.
00:39:27.729 [2024-11-06 15:43:55.110126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.110169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.110391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.110435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.110749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.110799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.111045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.111088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.111303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.111347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 
00:39:27.729 [2024-11-06 15:43:55.111578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.111620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.111773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.111816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.112122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.112164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.112395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.112447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.112635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.112692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 
00:39:27.729 [2024-11-06 15:43:55.113000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.113045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.113297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.113345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.113545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.113588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.113732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.113778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.114035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.114078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 
00:39:27.729 [2024-11-06 15:43:55.114330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.114378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.114558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.114601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.114808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.114852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.115142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.115188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.115419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.115463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 
00:39:27.729 [2024-11-06 15:43:55.115686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.115729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.115923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.115967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.116226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.116274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.116427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.116471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.116667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.116710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 
00:39:27.729 [2024-11-06 15:43:55.116904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.116949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.117158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.729 [2024-11-06 15:43:55.117213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.729 qpair failed and we were unable to recover it. 00:39:27.729 [2024-11-06 15:43:55.117375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.117420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.117678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.117723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.117956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.118004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 
00:39:27.730 [2024-11-06 15:43:55.118238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.118286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.118633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.118695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.119066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.119113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.119329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.119374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.119619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.119664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 
00:39:27.730 [2024-11-06 15:43:55.119887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.119930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.120080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.120123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.120382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.120428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.120701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.120745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.120897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.120942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 
00:39:27.730 [2024-11-06 15:43:55.121135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.121178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.121344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.121389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.121554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.121605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.121886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.121930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.122146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.122189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 
00:39:27.730 [2024-11-06 15:43:55.122490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.122534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.122797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.122842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.123063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.123109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.123334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.123381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.123610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.123653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 
00:39:27.730 [2024-11-06 15:43:55.123956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.124000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.124221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.124268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.124477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.124521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.124670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.124713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.124854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.124898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 
00:39:27.730 [2024-11-06 15:43:55.125122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.125170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.125385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.125430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.125564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.125607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.125810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.125853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.126056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.126100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 
00:39:27.730 [2024-11-06 15:43:55.126429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.126474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.126630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.126675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.126875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.126920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.730 [2024-11-06 15:43:55.127133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.730 [2024-11-06 15:43:55.127177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.730 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.127477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.127524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 
00:39:27.731 [2024-11-06 15:43:55.127778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.127822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.128078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.128123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.128316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.128361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.128571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.128615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.128790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.128839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 
00:39:27.731 [2024-11-06 15:43:55.129072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.129119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.129470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.129527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.129695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.129742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.130024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.130069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.130346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.130392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 
00:39:27.731 [2024-11-06 15:43:55.130670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.130714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.130938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.130983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.131216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.131262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.131479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.131524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.131724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.131768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 
00:39:27.731 [2024-11-06 15:43:55.132024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.132068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.132333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.132378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.132675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.132730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.132882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.132929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 00:39:27.731 [2024-11-06 15:43:55.133224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.731 [2024-11-06 15:43:55.133270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.731 qpair failed and we were unable to recover it. 
00:39:27.731 [2024-11-06 15:43:55.133529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.133573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.133779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.133824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.134126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.134169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.134415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.134465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.134709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.134755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.135065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.135109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.135320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.135365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.135497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.135540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.135761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.135806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.136009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.136051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.136272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.136317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.136614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.136658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.136813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.136856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.731 qpair failed and we were unable to recover it.
00:39:27.731 [2024-11-06 15:43:55.137086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.731 [2024-11-06 15:43:55.137127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.137340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.137385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.137658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.137702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.137980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.138023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.138246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.138291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.138485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.138540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.138865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.138909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.139187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.139245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.139549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.139593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.139851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.139895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.140102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.140145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.140376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.140427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.140588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.140633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.140762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.140806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.141003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.141046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.141275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.141320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.141575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.141619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.141827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.141872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.142126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.142169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.142375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.142421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.142680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.142724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.143039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.143082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.143413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.143460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.143623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.143679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.143954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.144004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.144162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.144219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.144464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.144505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.144665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.144708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.145026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.145075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.145441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.145489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.145694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.145739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.732 [2024-11-06 15:43:55.145949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.732 [2024-11-06 15:43:55.145993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.732 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.146219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.146264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.146533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.146584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.146852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.146901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.147186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.147241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.147392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.147435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.147592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.147644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.147973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.148018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.148225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.148273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.148441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.148485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.148750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.148797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.149089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.149134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.149425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.149471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.149736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.149784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.150051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.150105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.150371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.150416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.150707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.150756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.150921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.150966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.151181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.151237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.151387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.151430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.151674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.151724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.151931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.151974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.152166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.152223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.152360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.152402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.152597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.152642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.152932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.152975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.153200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.153257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.153577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.153619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.153778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.153824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.154100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.154148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.154358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.154402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.154618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.154661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.154867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.154910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.155171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.155233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.155440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.155483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.155713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.155758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.156043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.156086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.733 qpair failed and we were unable to recover it.
00:39:27.733 [2024-11-06 15:43:55.156352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.733 [2024-11-06 15:43:55.156396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.156537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.156579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.156829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.156872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.157068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.157112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.157434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.157479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.157692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.157735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.157968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.158011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.158242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.158287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.158589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.158632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.158867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.158911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.159196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.159249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.159464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.159506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.159662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.159706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.159953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.159998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.160236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.160294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.160450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.160494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.160654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.160696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.161022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.161066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.161349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.161394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.161653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.161696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.162019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.162062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.162300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.162345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.162507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.162551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.162782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.162837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.163155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.163214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.163455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.163510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.163767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.163810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.164016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.164059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.164284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.164328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.164620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.164668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.164924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.164976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.165129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.165173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.165397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.165443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.165589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.734 [2024-11-06 15:43:55.165632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.734 qpair failed and we were unable to recover it.
00:39:27.734 [2024-11-06 15:43:55.165885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.734 [2024-11-06 15:43:55.165929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.734 qpair failed and we were unable to recover it. 00:39:27.734 [2024-11-06 15:43:55.166114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.734 [2024-11-06 15:43:55.166158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.734 qpair failed and we were unable to recover it. 00:39:27.734 [2024-11-06 15:43:55.166397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.734 [2024-11-06 15:43:55.166450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.734 qpair failed and we were unable to recover it. 00:39:27.734 [2024-11-06 15:43:55.166595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.166647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.166804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.166848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 
00:39:27.735 [2024-11-06 15:43:55.167128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.167173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.167401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.167447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.167654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.167699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.167891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.167936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.168124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.168167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 
00:39:27.735 [2024-11-06 15:43:55.168385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.168432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.168590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.168632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.168832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.168875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.169069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.169112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.169309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.169354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 
00:39:27.735 [2024-11-06 15:43:55.169482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.169525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.169735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.169778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.169971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.170014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.170149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.170192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.170339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.170381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 
00:39:27.735 [2024-11-06 15:43:55.170519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.170562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.170791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.170836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.170975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.171017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.171242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.171286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.171511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.171555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 
00:39:27.735 [2024-11-06 15:43:55.171677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.171720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.171874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.171916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.172058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.172102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.172396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.172446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.172719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.172770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 
00:39:27.735 [2024-11-06 15:43:55.173019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.173074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.173225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.173272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.173419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.173462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.173666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.173710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.173901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.173944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 
00:39:27.735 [2024-11-06 15:43:55.174091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.174133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.174431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.174476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.174695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.174739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.174942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.174986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.735 qpair failed and we were unable to recover it. 00:39:27.735 [2024-11-06 15:43:55.175181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.735 [2024-11-06 15:43:55.175258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 
00:39:27.736 [2024-11-06 15:43:55.175449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.175493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.175632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.175676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.175873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.175917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.176130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.176175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.176447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.176491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 
00:39:27.736 [2024-11-06 15:43:55.176679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.176723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.176915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.176961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.177180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.177234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.177431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.177474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.177624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.177668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 
00:39:27.736 [2024-11-06 15:43:55.177924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.177970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.178174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.178230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.178401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.178445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.178641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.178685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.178897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.178940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 
00:39:27.736 [2024-11-06 15:43:55.179083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.179127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.179344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.179390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.179532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.179576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.179771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.179817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.180043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.180087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 
00:39:27.736 [2024-11-06 15:43:55.180219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.180266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.180457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.180501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.180762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.180806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.180941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.180987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.181188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.181244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 
00:39:27.736 [2024-11-06 15:43:55.181396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.181440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.181574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.181619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.181759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.181805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.182014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.182059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.182255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.182310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 
00:39:27.736 [2024-11-06 15:43:55.182505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.736 [2024-11-06 15:43:55.182552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.736 qpair failed and we were unable to recover it. 00:39:27.736 [2024-11-06 15:43:55.182689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.182735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.182927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.182972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.183119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.183163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.183340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.183404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 
00:39:27.737 [2024-11-06 15:43:55.183565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.183627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.183791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.183845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.183984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.184032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.184165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.184218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.184371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.184417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 
00:39:27.737 [2024-11-06 15:43:55.184629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.184674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.184826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.184870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.185074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.185118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.185266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.185313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.185451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.185497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 
00:39:27.737 [2024-11-06 15:43:55.185777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.185824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.186081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.186126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.186300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.186346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.186483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.186527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.186758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.186810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 
00:39:27.737 [2024-11-06 15:43:55.186937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.186994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.187215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.187261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.187453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.187497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.187697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.187742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.187871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.187915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 
00:39:27.737 [2024-11-06 15:43:55.188043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.188085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.188230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.188276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.188484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.188529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.188643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.188686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.188879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.188923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 
00:39:27.737 [2024-11-06 15:43:55.189116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.189158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.189392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.189437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.189638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.189682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.189821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.189865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.190099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.190143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 
00:39:27.737 [2024-11-06 15:43:55.190307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.190350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.190486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.190530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.737 qpair failed and we were unable to recover it. 00:39:27.737 [2024-11-06 15:43:55.190657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.737 [2024-11-06 15:43:55.190700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.190892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.190935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.191074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.191124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 
00:39:27.738 [2024-11-06 15:43:55.191291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.191336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.191459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.191504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.191693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.191737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.191926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.191969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.192122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.192165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 
00:39:27.738 [2024-11-06 15:43:55.192327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.192386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.192545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.192598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.192745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.192789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.192916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.192960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.193122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.193166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 
00:39:27.738 [2024-11-06 15:43:55.193396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.193453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.193597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.193644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.193846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.193890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.194039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.194084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.194250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.194296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 
00:39:27.738 [2024-11-06 15:43:55.194551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.194595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.194735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.194780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.194909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.194952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.195073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.195116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.195258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.195302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 
00:39:27.738 [2024-11-06 15:43:55.195416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.195459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.195590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.195634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.195760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.195806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.196089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.196133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.196332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.196376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 
00:39:27.738 [2024-11-06 15:43:55.196662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.196706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.196853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.196896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.197026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.197070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.197222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.197267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.197403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.197447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 
00:39:27.738 [2024-11-06 15:43:55.197641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.197685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.197819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.197863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.197996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.198040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.738 [2024-11-06 15:43:55.198188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.738 [2024-11-06 15:43:55.198246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.738 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.198465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.198508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 
00:39:27.739 [2024-11-06 15:43:55.198640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.198683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.198804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.198849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.198989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.199034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.199241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.199288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.199420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.199471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 
00:39:27.739 [2024-11-06 15:43:55.199666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.199711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.199845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.199890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.200031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.200075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.200193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.200251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.200456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.200502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 
00:39:27.739 [2024-11-06 15:43:55.200629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.200674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.200893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.200938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.201141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.201186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.201327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.201371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.201518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.201564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 
00:39:27.739 [2024-11-06 15:43:55.201776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.201844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.202053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.202098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.202278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.202324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.202471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.202515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.202718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.202766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 
00:39:27.739 [2024-11-06 15:43:55.202906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.202950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.203117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.203161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.203298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.203344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.203558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.203605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.203747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.203791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 
00:39:27.739 [2024-11-06 15:43:55.203928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.203973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.204099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.204144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.204361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.204406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.204524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.204569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.204699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.204745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 
00:39:27.739 [2024-11-06 15:43:55.204878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.204924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.205118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.205163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.205373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.205416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.205551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.205592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 00:39:27.739 [2024-11-06 15:43:55.205782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.739 [2024-11-06 15:43:55.205824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.739 qpair failed and we were unable to recover it. 
00:39:27.739 [2024-11-06 15:43:55.206045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.206086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.206227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.206269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.206463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.206503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.206644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.206684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.206819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.206859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 
00:39:27.740 [2024-11-06 15:43:55.207000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.207040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.207152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.207192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.207321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.207362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.207559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.207599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.207718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.207765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 
00:39:27.740 [2024-11-06 15:43:55.207898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.207939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.208119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.208159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.208355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.208398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.208590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.208632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.208758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.208797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 
00:39:27.740 [2024-11-06 15:43:55.208912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.208953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.209090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.209131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.209283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.209325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.209462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.209503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.209640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.209680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 
00:39:27.740 [2024-11-06 15:43:55.209814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.209855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.209965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.210006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.210231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.210273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.210395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.210436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.210623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.210663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 
00:39:27.740 [2024-11-06 15:43:55.210872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.210912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.211092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.211134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.211271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.211313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.211586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.211627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.211854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.211894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 
00:39:27.740 [2024-11-06 15:43:55.212025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.212065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.212300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.212342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.212474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.212514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.212642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.212683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.212823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.212864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 
00:39:27.740 [2024-11-06 15:43:55.213135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.213175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.740 qpair failed and we were unable to recover it. 00:39:27.740 [2024-11-06 15:43:55.213496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.740 [2024-11-06 15:43:55.213537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.213722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.213761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.213911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.213952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.214155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.214195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 
00:39:27.741 [2024-11-06 15:43:55.214338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.214380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.214495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.214534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.214681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.214722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.214954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.214998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.215138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.215190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 
00:39:27.741 [2024-11-06 15:43:55.215397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.215438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.215580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.215621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.215793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.215833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.216055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.216096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.216278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.216335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 
00:39:27.741 [2024-11-06 15:43:55.216488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.216525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.216658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.216696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.216841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.216878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.217000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.217036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.217154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.217191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 
00:39:27.741 [2024-11-06 15:43:55.217379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.217417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.217597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.217635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.217815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.217853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.218044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.218082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.218242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.218280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 
00:39:27.741 [2024-11-06 15:43:55.218409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.218446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.218590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.218628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.218874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.218912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.219066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.219104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.219309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.219349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 
00:39:27.741 [2024-11-06 15:43:55.219530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.219568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.219764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.219802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.741 [2024-11-06 15:43:55.220064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.741 [2024-11-06 15:43:55.220102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.741 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.220285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.220323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.220589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.220626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 
00:39:27.742 [2024-11-06 15:43:55.220927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.220964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.221217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.221256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.221458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.221496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.221614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.221652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.221780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.221817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 
00:39:27.742 [2024-11-06 15:43:55.222001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.222038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.222161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.222199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.222377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.222415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.222603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.222642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.222839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.222877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 
00:39:27.742 [2024-11-06 15:43:55.223072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.223111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.223243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.223282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.223481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.223518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.223717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.223755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.223997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.224034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 
00:39:27.742 [2024-11-06 15:43:55.224254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.224293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.224432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.224469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.224656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.224692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.225011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.225049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.225293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.225337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 
00:39:27.742 [2024-11-06 15:43:55.225560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.225598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.225820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.225857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.226166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.226225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.226374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.226415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 00:39:27.742 [2024-11-06 15:43:55.226631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.742 [2024-11-06 15:43:55.226671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.742 qpair failed and we were unable to recover it. 
00:39:27.742 [2024-11-06 15:43:55.226888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.742 [2024-11-06 15:43:55.226929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.742 qpair failed and we were unable to recover it.
00:39:27.742 [2024-11-06 15:43:55.227166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.742 [2024-11-06 15:43:55.227215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.742 qpair failed and we were unable to recover it.
00:39:27.742 [2024-11-06 15:43:55.227466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.742 [2024-11-06 15:43:55.227506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.742 qpair failed and we were unable to recover it.
00:39:27.742 [2024-11-06 15:43:55.227798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.742 [2024-11-06 15:43:55.227839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.742 qpair failed and we were unable to recover it.
00:39:27.742 [2024-11-06 15:43:55.228084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.742 [2024-11-06 15:43:55.228125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.742 qpair failed and we were unable to recover it.
00:39:27.742 [2024-11-06 15:43:55.228316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.742 [2024-11-06 15:43:55.228358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.742 qpair failed and we were unable to recover it.
00:39:27.742 [2024-11-06 15:43:55.228558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.742 [2024-11-06 15:43:55.228600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.742 qpair failed and we were unable to recover it.
00:39:27.742 [2024-11-06 15:43:55.228799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.742 [2024-11-06 15:43:55.228839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.742 qpair failed and we were unable to recover it.
00:39:27.742 [2024-11-06 15:43:55.229083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.742 [2024-11-06 15:43:55.229125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.742 qpair failed and we were unable to recover it.
00:39:27.742 [2024-11-06 15:43:55.229372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.229414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.229632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.229673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.229942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.229983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.230252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.230297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.230565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.230618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.230923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.230964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.231238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.231281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.231416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.231457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.231670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.231710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.231854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.231896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.232025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.232066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.232217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.232258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.232468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.232509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.232758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.232798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.233113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.233154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.233470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.233512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.233710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.233750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.234037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.234077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.234344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.234387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.234663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.234703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.234913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.234954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.235178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.235228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.235364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.235404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.235655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.235696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.235923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.235965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.236152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.236218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.236456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.236501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.236648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.236693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.237010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.237055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.237345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.237391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.237606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.237651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.237950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.237995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.238142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.238186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.238421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.238466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.238749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.238793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.239050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.239093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.743 [2024-11-06 15:43:55.239386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.743 [2024-11-06 15:43:55.239433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.743 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.239574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.239618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.239904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.239947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.240149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.240193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.240354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.240399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.240555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.240599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.240746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.240790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.240990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.241035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.241270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.241315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.241513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.241559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.241777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.241821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.242037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.242081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.242358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.242404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.242662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.242707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.242944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.242988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.243141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.243185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.243432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.243486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.243695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.243741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.243947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.243992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.244148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.244193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.244421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.244466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.244673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.244718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.244952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.244997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.245289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.245336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.245535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.245580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.245717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.245761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.246055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.246100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.246376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.246423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.246586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.246631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.246914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.246966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.247180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.247236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.247381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.247425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.247682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.247726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.247930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.247975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.248241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.248286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.248429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.248474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.744 [2024-11-06 15:43:55.248684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.744 [2024-11-06 15:43:55.248728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.744 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.248937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.248981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.249216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.249262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.249415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.249460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.249672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.249717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.250011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.250056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.250313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.250360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.250509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.250556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.250724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.250768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.250981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.251025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.251317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.251363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.251609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.251655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.251808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.251863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.252057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.745 [2024-11-06 15:43:55.252102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.745 qpair failed and we were unable to recover it.
00:39:27.745 [2024-11-06 15:43:55.252260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.252304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.252460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.252505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.252668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.252712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.252945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.252990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.253253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.253299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 
00:39:27.745 [2024-11-06 15:43:55.253460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.253505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.253687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.253748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.254068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.254115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.254378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.254425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.254583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.254628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 
00:39:27.745 [2024-11-06 15:43:55.254824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.254869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.255067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.255112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.255318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.255366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.255571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.255616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.255839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.255884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 
00:39:27.745 [2024-11-06 15:43:55.256101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.256146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.256293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.256339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.256600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.256644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.256916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.256961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.257277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.257332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 
00:39:27.745 [2024-11-06 15:43:55.257475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.257533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.257742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.257787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.258065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.258110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.745 [2024-11-06 15:43:55.258336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.745 [2024-11-06 15:43:55.258382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.745 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.258599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.258644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 
00:39:27.746 [2024-11-06 15:43:55.258956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.259003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.259321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.259367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.259524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.259569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.259786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.259831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.260114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.260159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 
00:39:27.746 [2024-11-06 15:43:55.260305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.260351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.260630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.260675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.261017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.261061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.261347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.261396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.261611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.261654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 
00:39:27.746 [2024-11-06 15:43:55.261950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.261995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.262224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.262270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.262530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.262575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.262769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.262813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.262955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.263000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 
00:39:27.746 [2024-11-06 15:43:55.263294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.263341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.263621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.263665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.263969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.264014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.264330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.264376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.264672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.264717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 
00:39:27.746 [2024-11-06 15:43:55.264984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.265029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.265345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.265411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.265577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.265629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.265846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.265891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.266147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.266191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 
00:39:27.746 [2024-11-06 15:43:55.266497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.266543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.266746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.266791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.267094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.267138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.267422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.267467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 00:39:27.746 [2024-11-06 15:43:55.267601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.267646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.746 qpair failed and we were unable to recover it. 
00:39:27.746 [2024-11-06 15:43:55.267938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.746 [2024-11-06 15:43:55.267982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.268223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.268269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.268463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.268508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.268725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.268769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.269025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.269076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 
00:39:27.747 [2024-11-06 15:43:55.269295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.269341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.269565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.269609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.269840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.269889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.270108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.270165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.270343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.270388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 
00:39:27.747 [2024-11-06 15:43:55.270589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.270632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.270942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.270987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.271220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.271266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.271483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.271527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.271681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.271726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 
00:39:27.747 [2024-11-06 15:43:55.271967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.272012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.272221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.272266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.272551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.272595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.272804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.272850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.273118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.273164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 
00:39:27.747 [2024-11-06 15:43:55.273469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.273521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.273806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.273852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.274125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.274170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.274455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.274502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.274732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.274776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 
00:39:27.747 [2024-11-06 15:43:55.275062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.275106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.275314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.275360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.275628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.275674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.275991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.276036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 00:39:27.747 [2024-11-06 15:43:55.276249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.747 [2024-11-06 15:43:55.276295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.747 qpair failed and we were unable to recover it. 
00:39:27.747 [2024-11-06 15:43:55.276518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.747 [2024-11-06 15:43:55.276563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.747 qpair failed and we were unable to recover it.
00:39:27.747 [2024-11-06 15:43:55.276787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.747 [2024-11-06 15:43:55.276844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.747 qpair failed and we were unable to recover it.
00:39:27.747 [2024-11-06 15:43:55.277164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.747 [2024-11-06 15:43:55.277220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.747 qpair failed and we were unable to recover it.
00:39:27.747 [2024-11-06 15:43:55.277397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.747 [2024-11-06 15:43:55.277442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.747 qpair failed and we were unable to recover it.
00:39:27.747 [2024-11-06 15:43:55.277602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.747 [2024-11-06 15:43:55.277647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.747 qpair failed and we were unable to recover it.
00:39:27.747 [2024-11-06 15:43:55.277997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.747 [2024-11-06 15:43:55.278041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.747 qpair failed and we were unable to recover it.
00:39:27.747 [2024-11-06 15:43:55.278299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.747 [2024-11-06 15:43:55.278345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.747 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.278548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.278592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.278801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.278845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.278990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.279036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.279302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.279349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.279512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.279557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.279780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.279825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.280056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.280100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.280307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.280360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.280566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.280610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.280920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.280965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.281268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.281313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.281512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.281557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.281913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.281958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.282243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.282289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.282499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.282544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.282694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.282738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.282945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.282990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.283219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.283264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.283519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.283563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.283716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.283760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.283981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.284025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.284335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.284381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.284540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.284585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.284785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.284829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.285091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.285136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.285443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.285489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.285748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.285792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.286104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.286149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.286417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.286463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.286622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.286667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.286897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.286941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.287136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.287180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.287472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.287518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.287827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.287871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.288169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.288235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.288529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.748 [2024-11-06 15:43:55.288576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.748 qpair failed and we were unable to recover it.
00:39:27.748 [2024-11-06 15:43:55.288830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.288881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.289104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.289148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.289345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.289393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.289595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.289639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.289849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.289893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.290148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.290193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.290472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.290517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.290729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.290773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.291094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.291140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.291387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.291434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.291707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.291751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.291990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.292042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.292328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.292375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.292635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.292679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.292839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.292884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.293075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.293119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.293376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.293422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.293578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.293623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.293890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.293934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.294144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.294188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.294411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.294455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.294759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.294803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.295055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.295100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.295318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.295364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.295621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.295665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.295967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.296012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.296167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.296222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.296481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.296526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.296716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.296761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.297001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.297045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.297332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.297378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.297586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.297630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.297864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.297908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.298109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.298153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.298398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.298444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.298608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.298653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.749 qpair failed and we were unable to recover it.
00:39:27.749 [2024-11-06 15:43:55.298982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.749 [2024-11-06 15:43:55.299026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.299237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.299283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.299533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.299592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.299945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.299996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.300319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.300368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.300632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.300677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.300888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.300932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.301138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.301182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.301408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.301454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.301662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.301707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.302048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.302092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.302326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.302373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.302573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.302619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.302876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.302919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.303246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.303291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.303449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.303494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.303709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.303755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.303944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.304000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.304195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.304255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.304511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.304554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.304695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.304739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.304949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.304993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.305216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.305262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.305420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.305464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.305669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.305713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.305924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.305969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.306118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.306162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.306403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.306447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.306661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.306705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.306961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.307006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.307235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.307281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.307481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.307526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.307728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.307773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.307920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.307964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.308179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.308233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.308538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:27.750 [2024-11-06 15:43:55.308584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:27.750 qpair failed and we were unable to recover it.
00:39:27.750 [2024-11-06 15:43:55.308798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.750 [2024-11-06 15:43:55.308843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.750 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.309080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.309126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.309338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.309385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.309597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.309642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.309876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.309920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 
00:39:27.751 [2024-11-06 15:43:55.310224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.310269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.310517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.310567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.310771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.310816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.311073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.311116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.311324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.311371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 
00:39:27.751 [2024-11-06 15:43:55.311562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.311607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.311812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.311856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.312105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.312151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.312370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.312416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.312681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.312725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 
00:39:27.751 [2024-11-06 15:43:55.312961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.313005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.313332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.313379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.313531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.313576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.313844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.313888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.314073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.314118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 
00:39:27.751 [2024-11-06 15:43:55.314463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.314509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.314745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.314791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.314953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.315000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.315221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.315268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.315475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.315519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 
00:39:27.751 [2024-11-06 15:43:55.315724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.315769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.315975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.316020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.316299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.316346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.316559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.316604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.316889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.316932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 
00:39:27.751 [2024-11-06 15:43:55.317081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.317125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.317349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.317395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.751 qpair failed and we were unable to recover it. 00:39:27.751 [2024-11-06 15:43:55.317615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.751 [2024-11-06 15:43:55.317660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.317882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.317928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.318191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.318246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 
00:39:27.752 [2024-11-06 15:43:55.318477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.318522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.318736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.318781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.318987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.319031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.319246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.319293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.319516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.319561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 
00:39:27.752 [2024-11-06 15:43:55.319771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.319815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.320123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.320167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.320346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.320400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.320618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.320670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.320936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.320987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 
00:39:27.752 [2024-11-06 15:43:55.321193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.321251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.321392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.321445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.321686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.321732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.321939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.321984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.322193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.322252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 
00:39:27.752 [2024-11-06 15:43:55.322480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.322525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.322738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.322782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.322982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.323027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.323261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.323308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.323468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.323513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 
00:39:27.752 [2024-11-06 15:43:55.323702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.323747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.323950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.323994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.324272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.324318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.324464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.324509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.324766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.324811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 
00:39:27.752 [2024-11-06 15:43:55.325016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.325061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.325282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.325328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.325534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.325578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.325720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.325765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.326022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.326067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 
00:39:27.752 [2024-11-06 15:43:55.326366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.326412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.326614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.326666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.326922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.326968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.327198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.752 [2024-11-06 15:43:55.327255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.752 qpair failed and we were unable to recover it. 00:39:27.752 [2024-11-06 15:43:55.327539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.327583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 
00:39:27.753 [2024-11-06 15:43:55.327823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.327869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.328158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.328214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.328477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.328522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.328731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.328776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.329057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.329102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 
00:39:27.753 [2024-11-06 15:43:55.329305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.329352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.329620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.329665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.329960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.330005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.330287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.330333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.330547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.330592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 
00:39:27.753 [2024-11-06 15:43:55.330800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.330845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.331041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.331086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.331290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.331336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.331540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.331584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.331804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.331849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 
00:39:27.753 [2024-11-06 15:43:55.332153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.332198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.332563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.332615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.332817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.332861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.333185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.333257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.333518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.333562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 
00:39:27.753 [2024-11-06 15:43:55.333845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.333890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.334094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.334138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.334386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.334434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.334694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.334739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.334949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.334993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 
00:39:27.753 [2024-11-06 15:43:55.335253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.335300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.335508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.335552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.335803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.335848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.336055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.336100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.336336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.336382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 
00:39:27.753 [2024-11-06 15:43:55.336585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.336629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.336841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.336886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.337141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.337187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.337489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.337534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 00:39:27.753 [2024-11-06 15:43:55.337757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.753 [2024-11-06 15:43:55.337802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.753 qpair failed and we were unable to recover it. 
00:39:27.753 [2024-11-06 15:43:55.338009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.338052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.338200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.338257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.338541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.338586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.338844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.338888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.339100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.339144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 
00:39:27.754 [2024-11-06 15:43:55.339352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.339398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.339626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.339670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.339872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.339917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.340139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.340185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.340416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.340462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 
00:39:27.754 [2024-11-06 15:43:55.340721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.340765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.340969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.341014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.341273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.341319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.341516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.341560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.341757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.341802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 
00:39:27.754 [2024-11-06 15:43:55.341933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.341976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:27.754 [2024-11-06 15:43:55.342175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:27.754 [2024-11-06 15:43:55.342231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:27.754 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.342466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.342511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.342799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.342845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.343052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.343096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 
00:39:28.035 [2024-11-06 15:43:55.343359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.343406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.343675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.343725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.343920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.343970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.344166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.344224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.344434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.344478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 
00:39:28.035 [2024-11-06 15:43:55.344737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.344782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.344992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.345038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.345256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.345303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.345432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.345478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.345712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.345757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 
00:39:28.035 [2024-11-06 15:43:55.345974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.346019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.346306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.346352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.346565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.346611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.346745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.346788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.347107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.347151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 
00:39:28.035 [2024-11-06 15:43:55.347363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.347410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.347646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.347690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.347948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.347993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.348218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.348264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.348466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.348510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 
00:39:28.035 [2024-11-06 15:43:55.348702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.348748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.349055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.349099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.349319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.349365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.349521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.349565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.349757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.349801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 
00:39:28.035 [2024-11-06 15:43:55.350062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.350107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.350381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.350427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.350727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.350773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.350989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.351035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 00:39:28.035 [2024-11-06 15:43:55.351330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.035 [2024-11-06 15:43:55.351377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.035 qpair failed and we were unable to recover it. 
00:39:28.036 [2024-11-06 15:43:55.351675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.351719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.351989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.352035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.352174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.352226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.352436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.352480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.352667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.352712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 
00:39:28.036 [2024-11-06 15:43:55.352915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.352960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.353093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.353137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.353389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.353436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.353645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.353689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.353879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.353922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 
00:39:28.036 [2024-11-06 15:43:55.354185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.354241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.354454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.354505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.354631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.354676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.354888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.354933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.355077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.355121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 
00:39:28.036 [2024-11-06 15:43:55.355324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.355369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.355506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.355550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.355834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.355877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.356137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.356182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.356402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.356448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 
00:39:28.036 [2024-11-06 15:43:55.356707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.356751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.357030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.357075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.357295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.357342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.357589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.357631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 00:39:28.036 [2024-11-06 15:43:55.357756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.036 [2024-11-06 15:43:55.357799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.036 qpair failed and we were unable to recover it. 
00:39:28.036 [2024-11-06 15:43:55.357937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.036 [2024-11-06 15:43:55.357983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.036 qpair failed and we were unable to recover it.
00:39:28.036 [2024-11-06 15:43:55.358195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.036 [2024-11-06 15:43:55.358248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.036 qpair failed and we were unable to recover it.
00:39:28.036 [2024-11-06 15:43:55.358403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.036 [2024-11-06 15:43:55.358446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.036 qpair failed and we were unable to recover it.
00:39:28.036 [2024-11-06 15:43:55.358654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.036 [2024-11-06 15:43:55.358698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.036 qpair failed and we were unable to recover it.
00:39:28.036 [2024-11-06 15:43:55.358929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.036 [2024-11-06 15:43:55.358975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.036 qpair failed and we were unable to recover it.
00:39:28.036 [2024-11-06 15:43:55.359185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.036 [2024-11-06 15:43:55.359241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.036 qpair failed and we were unable to recover it.
00:39:28.036 [2024-11-06 15:43:55.359436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.036 [2024-11-06 15:43:55.359481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.036 qpair failed and we were unable to recover it.
00:39:28.036 [2024-11-06 15:43:55.359612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.036 [2024-11-06 15:43:55.359657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.036 qpair failed and we were unable to recover it.
00:39:28.036 [2024-11-06 15:43:55.359869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.036 [2024-11-06 15:43:55.359912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.036 qpair failed and we were unable to recover it.
00:39:28.036 [2024-11-06 15:43:55.360118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.036 [2024-11-06 15:43:55.360163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.036 qpair failed and we were unable to recover it.
00:39:28.036 [2024-11-06 15:43:55.360423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.036 [2024-11-06 15:43:55.360479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.036 qpair failed and we were unable to recover it.
00:39:28.036 [2024-11-06 15:43:55.360699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.360746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.360988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.361046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.361221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.361269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.361451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.361498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.361716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.361782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.361930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.361975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.362190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.362248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.362465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.362508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.362742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.362788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.362978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.363022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.363282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.363328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.363530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.363576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.363724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.363770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.363962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.364007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.364142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.364186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.364383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.364435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.364628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.364673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.364865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.364910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.365107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.365151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.365368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.365414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.365616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.365662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.365803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.365848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.366103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.366148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.366301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.366348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.366621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.366666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.366864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.366908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.367106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.367152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.367290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.367336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.367624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.367668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.367967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.368014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.368156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.368200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.368434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.368479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.368668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.368715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.368863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.368908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.369097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.369142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.369346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.037 [2024-11-06 15:43:55.369391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.037 qpair failed and we were unable to recover it.
00:39:28.037 [2024-11-06 15:43:55.369678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.369722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.369939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.369983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.370184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.370243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.370375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.370418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.370564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.370608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.370914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.370959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.371113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.371158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.371362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.371408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.371574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.371619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.371826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.371872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.372084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.372129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.372343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.372397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.372538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.372583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.372729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.372774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.372991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.373038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.373295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.373343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.373472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.373518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.373711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.373757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.374032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.374078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.374288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.374341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.374573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.374617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.374828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.374873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.375109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.375153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.375308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.375353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.375563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.375608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.375893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.375938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.376072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.376116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.376262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.376308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.376523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.376568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.376710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.376754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.376969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.377016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.377251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.377300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.377443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.377498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.377641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.377685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.377962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.378005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.378150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.378195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.038 [2024-11-06 15:43:55.378402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.038 [2024-11-06 15:43:55.378444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.038 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.378584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.378627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.378882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.378926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.379132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.379177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.379328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.379372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.379534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.379578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.379713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.379757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.379904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.379954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.380166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.380230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.380372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.380416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.380643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.380690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.380837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.380882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.381084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.381129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.381274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.381320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.381515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.381560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.381701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.381746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.381951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.381997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.382227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.382275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.382428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.382472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.382599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.382644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.382773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.382818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.382965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.383011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.383134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.383181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.383444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.383497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.383647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.383692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.383888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.383933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.384147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.384192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.384420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.384466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.384599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.384643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.384851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.384897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.385126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.385173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.039 qpair failed and we were unable to recover it.
00:39:28.039 [2024-11-06 15:43:55.385378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.039 [2024-11-06 15:43:55.385424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.385553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.385599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.385832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.385876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.386075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.386120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.386273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.386320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.386448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.386494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.386663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.386710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.386830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.386876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.387035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.387080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.387216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.387263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.387406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.387451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.387582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.387627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.387819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.387864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.388055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.388100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.388319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.388365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.388508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.388554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.388870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.388919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.389044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.389087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.389234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.389294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.389461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.389510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.389745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.389791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.390000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.390045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.390236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.390285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.390602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.390647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.390803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.390850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.391039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.391102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.391300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.391346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.391633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.391679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.391816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.391863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.392013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.392059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.392333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.392381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.392594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.392639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.392857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.392910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.393069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.393115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.393265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.393312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.040 [2024-11-06 15:43:55.393452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.040 [2024-11-06 15:43:55.393497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.040 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.393710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.393756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.393896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.393942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.394234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.394282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.394412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.394458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.394594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.394638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.394771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.394816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.395052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.395097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.395233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.395281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.395412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.395459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.395695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.395741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.395892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.395938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.396071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.396118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.396319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.396370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.396633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.396679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.396921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.396965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.397153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.397199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.397406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.397452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.397643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.397689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.397828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.397873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.398036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.398086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.398304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.398352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.398486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.398530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.398727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.398772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.398934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.398999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.399314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.399368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.399573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.399621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.399747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.399794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.400003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.400049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.400247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.400294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.400486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.400531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.400755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.400800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.400946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.400991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.401182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.401237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.401457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.401502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.401651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.401695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.401910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.401957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.041 qpair failed and we were unable to recover it.
00:39:28.041 [2024-11-06 15:43:55.402242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.041 [2024-11-06 15:43:55.402294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.042 qpair failed and we were unable to recover it.
00:39:28.042 [2024-11-06 15:43:55.402561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.042 [2024-11-06 15:43:55.402605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.042 qpair failed and we were unable to recover it.
00:39:28.042 [2024-11-06 15:43:55.402809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.042 [2024-11-06 15:43:55.402857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.042 qpair failed and we were unable to recover it.
00:39:28.042 [2024-11-06 15:43:55.403145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.042 [2024-11-06 15:43:55.403190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.042 qpair failed and we were unable to recover it.
00:39:28.042 [2024-11-06 15:43:55.403343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.403388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.403535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.403580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.403814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.403860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.404012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.404057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.404194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.404255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 
00:39:28.042 [2024-11-06 15:43:55.404423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.404477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.404773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.404826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.405095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.405152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.405303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.405350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.405498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.405543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 
00:39:28.042 [2024-11-06 15:43:55.405680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.405725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.405936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.405981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.406253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.406303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.406440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.406488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.406678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.406724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 
00:39:28.042 [2024-11-06 15:43:55.406955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.407001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.407187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.407243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.407449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.407496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.407772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.407817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.407961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.408005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 
00:39:28.042 [2024-11-06 15:43:55.408300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.408347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.408546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.408592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.408796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.408842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.409072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.409120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.409342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.409389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 
00:39:28.042 [2024-11-06 15:43:55.409529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.409580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.409733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.409780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.409926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.409972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.410172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.410226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.410429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.410473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 
00:39:28.042 [2024-11-06 15:43:55.410681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.410727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.410917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.410962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.042 [2024-11-06 15:43:55.411084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.042 [2024-11-06 15:43:55.411129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.042 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.411286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.411332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.411479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.411525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 
00:39:28.043 [2024-11-06 15:43:55.411790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.411839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.412037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.412088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.412304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.412352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.412578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.412623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.412769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.412815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 
00:39:28.043 [2024-11-06 15:43:55.413029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.413088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.413288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.413334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.413533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.413577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.413860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.413905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.414037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.414084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 
00:39:28.043 [2024-11-06 15:43:55.414301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.414347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.414574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.414621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.414758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.414803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.415064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.415109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.415319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.415366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 
00:39:28.043 [2024-11-06 15:43:55.415583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.415630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.415774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.415820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.416055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.416100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.416383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.416430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.416640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.416686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 
00:39:28.043 [2024-11-06 15:43:55.416825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.416871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.417066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.417112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.417253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.417299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.417559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.417605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.417748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.417795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 
00:39:28.043 [2024-11-06 15:43:55.417943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.417990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.418134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.418179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.418452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.418497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.418742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.418791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.418947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.418992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 
00:39:28.043 [2024-11-06 15:43:55.419133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.419178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.419459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.419505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.419654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.419700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.419902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.043 [2024-11-06 15:43:55.419947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.043 qpair failed and we were unable to recover it. 00:39:28.043 [2024-11-06 15:43:55.420170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.420225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 
00:39:28.044 [2024-11-06 15:43:55.420450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.420494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.420747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.420791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.421059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.421104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.421260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.421306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.421512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.421557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 
00:39:28.044 [2024-11-06 15:43:55.421820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.421867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.422007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.422057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.422264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.422309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.422522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.422567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.422708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.422753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 
00:39:28.044 [2024-11-06 15:43:55.422911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.422956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.423219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.423266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.423462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.423506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.423649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.423694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.423894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.423938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 
00:39:28.044 [2024-11-06 15:43:55.424224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.424270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.424399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.424443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.424584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.424628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.424818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.424864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.425007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.425051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 
00:39:28.044 [2024-11-06 15:43:55.425189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.425247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.425547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.425591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.425736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.425782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.425976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.426021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.426300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.426347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 
00:39:28.044 [2024-11-06 15:43:55.426498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.426544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.426684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.426729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.426871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.426915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.427168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.427222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 00:39:28.044 [2024-11-06 15:43:55.427348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.044 [2024-11-06 15:43:55.427393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.044 qpair failed and we were unable to recover it. 
00:39:28.044 [2024-11-06 15:43:55.427527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.427571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.427765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.427810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.428018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.428063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.428290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.428344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.428576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.428619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 
00:39:28.045 [2024-11-06 15:43:55.428839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.428884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.429023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.429068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.429217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.429263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.429478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.429522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.429761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.429806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 
00:39:28.045 [2024-11-06 15:43:55.429997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.430042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.430266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.430313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.430547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.430593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.430850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.430895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.431047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.431092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 
00:39:28.045 [2024-11-06 15:43:55.431223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.431269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.431459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.431503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.431732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.431776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.431983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.432029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.432320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.432368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 
00:39:28.045 [2024-11-06 15:43:55.432683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.432727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.432940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.432984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.433194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.433252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.433518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.433562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.433760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.433804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 
00:39:28.045 [2024-11-06 15:43:55.434068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.434115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.434329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.434377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.434586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.434631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.434766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.434811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.434946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.434993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 
00:39:28.045 [2024-11-06 15:43:55.435242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.435287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.435496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.435541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.435727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.435772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.436031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.436074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.436306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.436353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 
00:39:28.045 [2024-11-06 15:43:55.436543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.436588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.045 qpair failed and we were unable to recover it. 00:39:28.045 [2024-11-06 15:43:55.436782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.045 [2024-11-06 15:43:55.436826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.436962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.437006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.437261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.437307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.437529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.437575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 
00:39:28.046 [2024-11-06 15:43:55.437725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.437770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.437970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.438015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.438171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.438228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.438456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.438509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.438733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.438778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 
00:39:28.046 [2024-11-06 15:43:55.438991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.439037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.439194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.439255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.439556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.439603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.439803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.439849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.439998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.440045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 
00:39:28.046 [2024-11-06 15:43:55.440234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.440282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.440474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.440520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.440785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.440830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.441063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.441109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.441368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.441415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 
00:39:28.046 [2024-11-06 15:43:55.441562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.441607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.441746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.441792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.441991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.442037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.442177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.442231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.442402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.442447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 
00:39:28.046 [2024-11-06 15:43:55.442647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.442694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.442849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.442894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.443153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.443199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.443435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.443479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.443673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.443718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 
00:39:28.046 [2024-11-06 15:43:55.443933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.443980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.444129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.444172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.444370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.444417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.444550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.444594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.444797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.444842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 
00:39:28.046 [2024-11-06 15:43:55.445080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.445126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.445278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.445324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.445549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.046 [2024-11-06 15:43:55.445594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.046 qpair failed and we were unable to recover it. 00:39:28.046 [2024-11-06 15:43:55.445726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.445771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.445921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.445967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 
00:39:28.047 [2024-11-06 15:43:55.446174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.446231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.446524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.446569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.446809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.446855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.446978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.447024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.447225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.447271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 
00:39:28.047 [2024-11-06 15:43:55.447487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.447532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.447673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.447718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.448004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.448048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.448304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.448359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.448579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.448624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 
00:39:28.047 [2024-11-06 15:43:55.448787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.448832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.449035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.449083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.449235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.449283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.449461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.449506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 00:39:28.047 [2024-11-06 15:43:55.449651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.047 [2024-11-06 15:43:55.449696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.047 qpair failed and we were unable to recover it. 
00:39:28.047 [2024-11-06 15:43:55.449831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.449876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.450076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.450120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.450290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.450350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.450588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.450638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.450783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.450836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.451110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.451158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.451408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.451458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.451732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.451777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.451989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.452034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.452235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.452282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.452414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.452458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.452584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.452629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.452838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.452883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.453085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.453131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.453390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.453437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.453688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.453746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.453868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.453913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.454135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.454180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.454398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.454444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.047 [2024-11-06 15:43:55.454651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.047 [2024-11-06 15:43:55.454696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.047 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.454853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.454900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.455102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.455146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.455385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.455432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.455590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.455634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.455888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.455937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.456092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.456137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.456449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.456500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.456642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.456689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.456898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.456944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.457154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.457215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.457436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.457482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.457625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.457671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.457869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.457914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.458067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.458119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.458379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.458426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.458656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.458703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.458858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.458903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.459109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.459153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.459305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.459350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.459504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.459552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.459757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.459803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.460014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.460061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.460215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.460262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.460453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.460498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.460697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.460742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.460985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.461032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.461236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.461283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.461426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.461472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.461593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.461639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.461774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.461820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.461963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.462008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.462236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.462288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.462509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.462568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.462696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.462742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.462873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.462919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.463134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.048 [2024-11-06 15:43:55.463181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.048 qpair failed and we were unable to recover it.
00:39:28.048 [2024-11-06 15:43:55.463479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.463525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.463729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.463775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.463995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.464042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.464234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.464280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.464454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.464499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.464654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.464701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.464832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.464880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.465027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.465072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.465335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.465381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.465590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.465635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.465779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.465825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.465981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.466026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.466227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.466274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.466558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.466605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.466830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.466875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.467020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.467065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.467229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.467277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.467474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.467525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.467727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.467773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.467926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.467973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.468114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.468160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.468370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.468417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.468610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.468663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.468859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.468906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.469107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.469153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.469304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.469352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.469474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.469518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.469715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.469759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.469900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.469946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.470078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.470123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.470386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.470435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.049 [2024-11-06 15:43:55.470585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.049 [2024-11-06 15:43:55.470631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.049 qpair failed and we were unable to recover it.
00:39:28.050 [2024-11-06 15:43:55.470885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.050 [2024-11-06 15:43:55.470930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.050 qpair failed and we were unable to recover it.
00:39:28.050 [2024-11-06 15:43:55.471079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.050 [2024-11-06 15:43:55.471128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.050 qpair failed and we were unable to recover it.
00:39:28.050 [2024-11-06 15:43:55.471358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.050 [2024-11-06 15:43:55.471403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.050 qpair failed and we were unable to recover it.
00:39:28.050 [2024-11-06 15:43:55.471625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.050 [2024-11-06 15:43:55.471672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.050 qpair failed and we were unable to recover it.
00:39:28.050 [2024-11-06 15:43:55.471871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.050 [2024-11-06 15:43:55.471916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.050 qpair failed and we were unable to recover it.
00:39:28.050 [2024-11-06 15:43:55.472124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.050 [2024-11-06 15:43:55.472169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.050 qpair failed and we were unable to recover it.
00:39:28.050 [2024-11-06 15:43:55.472385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.050 [2024-11-06 15:43:55.472431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.050 qpair failed and we were unable to recover it.
00:39:28.050 [2024-11-06 15:43:55.472666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.472711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.472992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.473039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.473166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.473223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.473430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.473476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.473611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.473657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 
00:39:28.050 [2024-11-06 15:43:55.473785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.473831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.473959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.474004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.474226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.474273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.474500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.474545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.474807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.474851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 
00:39:28.050 [2024-11-06 15:43:55.475061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.475107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.475246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.475293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.475454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.475500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.475625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.475669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.475879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.475925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 
00:39:28.050 [2024-11-06 15:43:55.476061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.476106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.476407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.476454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.476653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.476700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.476985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.477036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.477227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.477274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 
00:39:28.050 [2024-11-06 15:43:55.477456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.477506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.477775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.477832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.478046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.478093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.478373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.478418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.478571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.478618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 
00:39:28.050 [2024-11-06 15:43:55.478819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.478863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.479013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.479058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.479193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.479252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.479398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.050 [2024-11-06 15:43:55.479443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.050 qpair failed and we were unable to recover it. 00:39:28.050 [2024-11-06 15:43:55.479591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.479636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 
00:39:28.051 [2024-11-06 15:43:55.479857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.479902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.480101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.480145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.480369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.480416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.480633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.480678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.480895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.480940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 
00:39:28.051 [2024-11-06 15:43:55.481155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.481225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.481485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.481530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.481686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.481731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.481880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.481925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.482134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.482178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 
00:39:28.051 [2024-11-06 15:43:55.482388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.482436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.482582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.482628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.482852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.482897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.483050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.483096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.483377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.483425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 
00:39:28.051 [2024-11-06 15:43:55.483718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.483763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.483903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.483949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.484140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.484186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.484391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.484436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.484644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.484689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 
00:39:28.051 [2024-11-06 15:43:55.484883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.484928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.485158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.485211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.485423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.485469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.485725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.485771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.485927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.485974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 
00:39:28.051 [2024-11-06 15:43:55.486129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.486174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.486314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.486359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.486503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.486547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.486803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.486857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.486995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.487039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 
00:39:28.051 [2024-11-06 15:43:55.487173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.487230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.487440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.487486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.487777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.487821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.488017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.488062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 00:39:28.051 [2024-11-06 15:43:55.488257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.488303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.051 qpair failed and we were unable to recover it. 
00:39:28.051 [2024-11-06 15:43:55.488438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.051 [2024-11-06 15:43:55.488486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.488614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.488666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.488828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.488874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.489111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.489157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.489378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.489424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 
00:39:28.052 [2024-11-06 15:43:55.489615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.489662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.489897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.489942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.490141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.490186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.490411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.490457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.490651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.490697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 
00:39:28.052 [2024-11-06 15:43:55.490847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.490896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.491036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.491081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.491228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.491274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.491402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.491448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.491589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.491636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 
00:39:28.052 [2024-11-06 15:43:55.491893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.491939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.492133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.492179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.492320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.492365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.492634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.492680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 00:39:28.052 [2024-11-06 15:43:55.492838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.052 [2024-11-06 15:43:55.492888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.052 qpair failed and we were unable to recover it. 
00:39:28.052 [2024-11-06 15:43:55.493098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.493155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.493409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.493456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.493602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.493646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.493856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.493900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.494091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.494135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.494282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.494329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.494545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.494594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.494729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.494772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.494970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.495015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.495230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.495278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.495412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.495457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.495589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.495636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.495831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.495875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.496076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.496132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.496338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.496384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.496612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.496657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.496873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.052 [2024-11-06 15:43:55.496917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.052 qpair failed and we were unable to recover it.
00:39:28.052 [2024-11-06 15:43:55.497128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.497173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.497396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.497442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.497673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.497719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.497873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.497920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.498052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.498097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.498244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.498291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.498490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.498535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.498734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.498779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.498917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.498963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.499153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.499198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.499440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.499485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.499680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.499725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.499919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.499964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.500250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.500298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.500443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.500488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.500617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.500663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.500801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.500845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.500988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.501032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.501169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.501222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.501364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.501409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.501615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.501660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.501782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.501828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.502032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.502076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.502265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.502337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.502585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.502643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.502799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.502859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.503013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.503062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.503299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.503349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.503488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.503534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.503673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.503719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.503937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.503983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.504195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.504253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.504450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.504497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.504643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.504687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.504833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.504877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.505069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.505113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.053 qpair failed and we were unable to recover it.
00:39:28.053 [2024-11-06 15:43:55.505334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.053 [2024-11-06 15:43:55.505389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.505539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.505586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.505732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.505777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.505982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.506026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.506271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.506317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.506467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.506512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.506645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.506690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.506895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.506940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.507194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.507253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.507447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.507493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.507627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.507671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.507877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.507922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.508216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.508267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.508409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.508460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.508690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.508743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.508900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.508947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.509097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.509142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.509294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.509340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.509646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.509690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.509910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.509954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.510254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.510301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.510549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.510595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.510743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.510788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.510934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.510980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.511220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.511266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.511526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.511571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.511894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.511940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.512227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.512277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.512552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.512596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.512799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.512843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.513122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.513167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.054 qpair failed and we were unable to recover it.
00:39:28.054 [2024-11-06 15:43:55.513415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.054 [2024-11-06 15:43:55.513464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.513623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.513677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.513828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.513877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.514087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.514133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.514287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.514333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.514519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.514563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.514725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.514772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.514916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.514964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.515157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.515211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.515472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.515525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.515673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.515719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.515868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.515913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.516154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.516213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.516496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.516542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.516695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.516745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.516891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.516937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.517215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.517262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.517468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.517514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.517655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.517702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.517940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.517989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.518257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.518304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.518445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.518489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.518696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.518741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.519034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.519079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.519300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.519346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.519556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.519601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.519891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.519945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.520156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.520200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.520423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.520468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.520659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.520704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.520913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.055 [2024-11-06 15:43:55.520958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.055 qpair failed and we were unable to recover it.
00:39:28.055 [2024-11-06 15:43:55.521162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.055 [2024-11-06 15:43:55.521219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.055 qpair failed and we were unable to recover it. 00:39:28.055 [2024-11-06 15:43:55.521367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.055 [2024-11-06 15:43:55.521413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.055 qpair failed and we were unable to recover it. 00:39:28.055 [2024-11-06 15:43:55.521672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.055 [2024-11-06 15:43:55.521717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.055 qpair failed and we were unable to recover it. 00:39:28.055 [2024-11-06 15:43:55.521990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.055 [2024-11-06 15:43:55.522034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.055 qpair failed and we were unable to recover it. 00:39:28.055 [2024-11-06 15:43:55.522330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.055 [2024-11-06 15:43:55.522377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.055 qpair failed and we were unable to recover it. 
00:39:28.055 [2024-11-06 15:43:55.522606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.055 [2024-11-06 15:43:55.522654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.055 qpair failed and we were unable to recover it. 00:39:28.055 [2024-11-06 15:43:55.522877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.522924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.523220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.523267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.523525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.523571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.523714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.523759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 
00:39:28.056 [2024-11-06 15:43:55.523958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.524002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.524284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.524330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.524458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.524503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.524651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.524697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.524978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.525023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 
00:39:28.056 [2024-11-06 15:43:55.525253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.525300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.525519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.525564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.525779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.525822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.526015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.526066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.526372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.526419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 
00:39:28.056 [2024-11-06 15:43:55.526641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.526687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.526946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.527003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.527199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.527254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.527496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.527541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.527747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.527793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 
00:39:28.056 [2024-11-06 15:43:55.528000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.528044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.528181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.528235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.528454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.528498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.528758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.528803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.529000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.529045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 
00:39:28.056 [2024-11-06 15:43:55.529173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.529228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.529373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.529418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.529559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.529605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.529747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.529791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.529918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.529962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 
00:39:28.056 [2024-11-06 15:43:55.530246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.530293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.530433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.530479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.530675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.530721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.530976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.531021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.531226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.531272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 
00:39:28.056 [2024-11-06 15:43:55.531460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.531506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.531716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.531761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.056 qpair failed and we were unable to recover it. 00:39:28.056 [2024-11-06 15:43:55.531951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.056 [2024-11-06 15:43:55.531995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.532285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.532330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.532532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.532578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 
00:39:28.057 [2024-11-06 15:43:55.532855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.532904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.533186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.533243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.533454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.533500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.533789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.533834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.534035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.534081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 
00:39:28.057 [2024-11-06 15:43:55.534288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.534334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.534579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.534624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.534773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.534818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.534965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.535011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.535224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.535271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 
00:39:28.057 [2024-11-06 15:43:55.535467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.535511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.535725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.535769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.535971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.536015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.536156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.536216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.536358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.536403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 
00:39:28.057 [2024-11-06 15:43:55.536631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.536680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.536910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.536956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.537147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.537191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.537463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.537507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.537701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.537746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 
00:39:28.057 [2024-11-06 15:43:55.537942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.537987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.538251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.538304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.538618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.538664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.538845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.538890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.539122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.539171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 
00:39:28.057 [2024-11-06 15:43:55.539326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.539379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.539548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.539598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.539811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.539859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.540010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.540055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.540259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.540306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 
00:39:28.057 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:28.057 [2024-11-06 15:43:55.540500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.540544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 [2024-11-06 15:43:55.540685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.540731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.057 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:39:28.057 [2024-11-06 15:43:55.540935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.057 [2024-11-06 15:43:55.540980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.057 qpair failed and we were unable to recover it. 00:39:28.058 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:28.058 [2024-11-06 15:43:55.541182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.058 [2024-11-06 15:43:55.541237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.058 qpair failed and we were unable to recover it. 
00:39:28.058 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:28.058 [2024-11-06 15:43:55.541521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.058 [2024-11-06 15:43:55.541566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.058 qpair failed and we were unable to recover it. 00:39:28.058 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:28.058 [2024-11-06 15:43:55.541798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.058 [2024-11-06 15:43:55.541843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.058 qpair failed and we were unable to recover it. 00:39:28.058 [2024-11-06 15:43:55.541982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.058 [2024-11-06 15:43:55.542026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.058 qpair failed and we were unable to recover it. 00:39:28.058 [2024-11-06 15:43:55.542234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.058 [2024-11-06 15:43:55.542280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.058 qpair failed and we were unable to recover it. 
00:39:28.058 [2024-11-06 15:43:55.542506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.542557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.542693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.542738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.542951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.542996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.543211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.543259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.543407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.543451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.543660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.543704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.543905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.543949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.544093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.544141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.544311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.544373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.544589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.544634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.544885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.544930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.545136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.545182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.545342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.545386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.545671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.545723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.545934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.545981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.546185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.546243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.546436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.546481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.546621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.546668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.546810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.546854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.546977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.547022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.547278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.547325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.547568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.547613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.547823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.547871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.548065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.548111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.548374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.548421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.058 [2024-11-06 15:43:55.548666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.058 [2024-11-06 15:43:55.548711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.058 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.548859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.548904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.549170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.549225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.549386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.549432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.549634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.549679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.549821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.549866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.550069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.550115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.550253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.550300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.550448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.550493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.550685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.550730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.550855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.550900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.551036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.551081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.551342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.551388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.551660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.551705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.551903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.551949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.552181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.552238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.552393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.552437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.552571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.552616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.552800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.552845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.553045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.553089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.553287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.553333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.553550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.553595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.553815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.553859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.553991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.554036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.554290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.554337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.554558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.554611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.554759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.554804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.554961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.555006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.555226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.555278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.555427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.555472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.555661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.555707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.555925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.555970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.556163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.556220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.556368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.556412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.556567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.556612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.556737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.556782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.556966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.059 [2024-11-06 15:43:55.557009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.059 qpair failed and we were unable to recover it.
00:39:28.059 [2024-11-06 15:43:55.557139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.557185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.557415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.557460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.557729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.557774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.557905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.557950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.558165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.558221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.558432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.558476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.558601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.558646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.558829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.558875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.559073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.559125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.559325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.559377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.559583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.559641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.559836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.559881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.559995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.560040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.560166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.560222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.560374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.560417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.560564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.560608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.560751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.560794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.560991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.561035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.561251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.561300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.561595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.561640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.561830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.561873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.561997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.562041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.562251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.562298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.562439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.562484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.562689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.562734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.562869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.562914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.563071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.563116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.563309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.563355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.563545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.563591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.563714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.563759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.563895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.563939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.564073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.564124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.564274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.564320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.564458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.564502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.564635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.564679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.564890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.060 [2024-11-06 15:43:55.564934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.060 qpair failed and we were unable to recover it.
00:39:28.060 [2024-11-06 15:43:55.565213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.565264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.565458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.565504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.565698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.565743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.565893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.565938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.566132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.566178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.566384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.566425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.566705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.566747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.566990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.567032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.567277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.567320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.567471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.567513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.567719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.567760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.567912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.567954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.568148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.568193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.568420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.568461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.568597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.568640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.568789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.568830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.568957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.568997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.569160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.569213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.569406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.569448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.569641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.569682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.569934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.569979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.570179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.570236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.570389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.570437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.570747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.570804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.570972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.571018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.571226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.571271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.571568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.571613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.571754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.571796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.571940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.571983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:39:28.061 [2024-11-06 15:43:55.572194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.572255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.572377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.572423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:39:28.061 [2024-11-06 15:43:55.572637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.572683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 [2024-11-06 15:43:55.572844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.572890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:28.061 [2024-11-06 15:43:55.573085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.573143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.061 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:28.061 [2024-11-06 15:43:55.573303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.061 [2024-11-06 15:43:55.573352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.061 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.573584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.573629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.573765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.573810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.573954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.573999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.574192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.574250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.574382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.574427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.574565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.574609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.574755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.574799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.575012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.575064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.575220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.575272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.575466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.575512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.575649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.575694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.575833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.575878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.576022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.576067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.576218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.576265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.576386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.576429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.576574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.576619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.576739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.576783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.577002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.577047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.577270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.577325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.577548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.577596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.577737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.577786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.577980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.578025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.578176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.578247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.578376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.578422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.578545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.578588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.578777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.578828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.579037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.579082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.579284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.579329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.579520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.579565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.579707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.579751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.579875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.579920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.580070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.580115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.580263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.580309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.580445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.580490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.580746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.580791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.580981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.062 [2024-11-06 15:43:55.581027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.062 qpair failed and we were unable to recover it.
00:39:28.062 [2024-11-06 15:43:55.581225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.581271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.581405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.581449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.581586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.581630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.581777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.581826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.581966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.582016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.582171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.582230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.582360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.582405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.582584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.582629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.582775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.582819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.583008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.583055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.583263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.583310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.583449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.583492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.583689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.583734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.583864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.583909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.584032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.584076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.584232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.584279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.584433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.584479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.584610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.584655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.584796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.063 [2024-11-06 15:43:55.584841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.063 qpair failed and we were unable to recover it.
00:39:28.063 [2024-11-06 15:43:55.584960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.585005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 00:39:28.063 [2024-11-06 15:43:55.585246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.585293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 00:39:28.063 [2024-11-06 15:43:55.585427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.585473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 00:39:28.063 [2024-11-06 15:43:55.585594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.585639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 00:39:28.063 [2024-11-06 15:43:55.585832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.585876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 
00:39:28.063 [2024-11-06 15:43:55.586080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.586124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 00:39:28.063 [2024-11-06 15:43:55.586253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.586298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 00:39:28.063 [2024-11-06 15:43:55.586424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.586468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 00:39:28.063 [2024-11-06 15:43:55.586591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.586635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 00:39:28.063 [2024-11-06 15:43:55.586761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.586806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 
00:39:28.063 [2024-11-06 15:43:55.587004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.587062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 00:39:28.063 [2024-11-06 15:43:55.587266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.587315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 00:39:28.063 [2024-11-06 15:43:55.587444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.063 [2024-11-06 15:43:55.587489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.063 qpair failed and we were unable to recover it. 00:39:28.063 [2024-11-06 15:43:55.587755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.587800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.588010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.588054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 
00:39:28.064 [2024-11-06 15:43:55.588260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.588306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.588505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.588549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.588763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.588808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.588952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.588997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.589199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.589254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 
00:39:28.064 [2024-11-06 15:43:55.589386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.589431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.589692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.589738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.590002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.590046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.590181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.590237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.590456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.590501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 
00:39:28.064 [2024-11-06 15:43:55.590627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.590672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.590814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.590859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.590991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.591037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.591173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.591232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.591449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.591494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 
00:39:28.064 [2024-11-06 15:43:55.591684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.591729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.591862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.591909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.592104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.592149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.592365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.592417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.592593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.592639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 
00:39:28.064 [2024-11-06 15:43:55.592843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.592938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.593199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.593258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.593511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.593558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.593722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.593770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.593983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.594028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 
00:39:28.064 [2024-11-06 15:43:55.594159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.594214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.594377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.594423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.594643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.594689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.594943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.594987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.595195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.595262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 
00:39:28.064 [2024-11-06 15:43:55.595390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.595436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.595663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.595709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.595843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.595890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.596099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.596147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.064 qpair failed and we were unable to recover it. 00:39:28.064 [2024-11-06 15:43:55.596309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.064 [2024-11-06 15:43:55.596358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 
00:39:28.065 [2024-11-06 15:43:55.596490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.596540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.596670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.596713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.596853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.596896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.597043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.597091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.597318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.597364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 
00:39:28.065 [2024-11-06 15:43:55.597570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.597616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.597741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.597789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.597979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.598024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.598250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.598294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.598458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.598504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 
00:39:28.065 [2024-11-06 15:43:55.598643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.598687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.598823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.598865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.599000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.599046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.599324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.599372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.599509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.599555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 
00:39:28.065 [2024-11-06 15:43:55.599771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.599816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.600030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.600077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.600233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.600281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.600427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.600473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.600755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.600803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 
00:39:28.065 [2024-11-06 15:43:55.600939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.600986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.601142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.601189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.601419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.601463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.601657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.601704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.602008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.602058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 
00:39:28.065 [2024-11-06 15:43:55.602322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.602370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.602613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.602658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.602816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.602873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.603031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.603096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.603327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.603384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 
00:39:28.065 [2024-11-06 15:43:55.603619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.603677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.603899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.603963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.604183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.604243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.604559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.604604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.604865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.604910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 
00:39:28.065 [2024-11-06 15:43:55.605043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.065 [2024-11-06 15:43:55.605088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.065 qpair failed and we were unable to recover it. 00:39:28.065 [2024-11-06 15:43:55.605285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.066 [2024-11-06 15:43:55.605335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.066 qpair failed and we were unable to recover it. 00:39:28.066 [2024-11-06 15:43:55.605627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.066 [2024-11-06 15:43:55.605671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.066 qpair failed and we were unable to recover it. 00:39:28.066 [2024-11-06 15:43:55.605940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.066 [2024-11-06 15:43:55.605985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.066 qpair failed and we were unable to recover it. 00:39:28.066 [2024-11-06 15:43:55.606271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.066 [2024-11-06 15:43:55.606318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.066 qpair failed and we were unable to recover it. 
00:39:28.066 [2024-11-06 15:43:55.606516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.066 [2024-11-06 15:43:55.606568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.066 qpair failed and we were unable to recover it. 00:39:28.066 [2024-11-06 15:43:55.606784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.066 [2024-11-06 15:43:55.606830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.066 qpair failed and we were unable to recover it. 00:39:28.066 [2024-11-06 15:43:55.607031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.066 [2024-11-06 15:43:55.607075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.066 qpair failed and we were unable to recover it. 00:39:28.066 [2024-11-06 15:43:55.607221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.066 [2024-11-06 15:43:55.607264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.066 qpair failed and we were unable to recover it. 00:39:28.066 [2024-11-06 15:43:55.607521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.066 [2024-11-06 15:43:55.607567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.066 qpair failed and we were unable to recover it. 
00:39:28.066 [2024-11-06 15:43:55.607714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.607758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.607988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.608033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.608227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.608273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.608409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.608456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.608599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.608657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.608894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.608938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.609079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.609123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.609307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.609354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.609561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.609605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.609818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.609863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.610010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.610055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.610182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.610240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.610429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.610475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.610675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.610721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.611030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.611074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.611276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.611324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.611468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.611512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.611725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.611770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.611977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.612022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.612162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.612220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.612362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.612406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.612535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.612579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.612725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.612770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.613039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.613084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.613355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.613402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.613602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.613648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.066 [2024-11-06 15:43:55.613850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.066 [2024-11-06 15:43:55.613895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.066 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.614037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.614082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.614230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.614275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.614415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.614461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.614655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.614700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.614898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.614943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.615172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.615237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.615433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.615478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.615601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.615644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.615858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.615908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.616030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.616075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.616227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.616273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.616468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.616512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.616699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.616743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.616875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.616918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.617171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.617225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.617418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.617463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.617584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.617628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.617882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.617926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.618170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.618224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.618498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.618544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.618744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.618789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.619023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.619067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.619337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.619385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.619579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.619623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.619818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.619863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.620169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.620223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.620470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.620515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.620640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.620685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.620822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.620867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.621067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.621111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.621251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.621297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.621439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.621484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.621697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.621741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.621968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.622013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.622317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.622363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.067 [2024-11-06 15:43:55.622505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.067 [2024-11-06 15:43:55.622550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.067 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.622701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.622745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.622980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.623024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.623147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.623191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.623416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.623460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.623598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.623643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.623924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.623980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.624219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.624265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.624520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.624565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.624776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.624820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.625027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.625071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.625193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.625250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.625451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.625496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.625688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.625739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.626015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.626059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.626217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.626263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.626474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.626519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.626656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.626700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.626819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.626864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.627067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.627113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.627307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.627354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.627566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.627611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.627800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.627844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.627969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.628014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.628139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.628183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.628344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.628389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.628590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.628636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.628848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.628894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.629027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.629072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.629347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.629393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.629653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.629698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.629886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.629930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.068 [2024-11-06 15:43:55.630106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.068 [2024-11-06 15:43:55.630149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.068 qpair failed and we were unable to recover it.
00:39:28.069 [2024-11-06 15:43:55.630354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.630401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.630610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.630654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.630893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.630937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.631198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.631263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.631387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.631433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 
00:39:28.069 [2024-11-06 15:43:55.631666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.631710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.631919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.631963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.632094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.632139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.632411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.632457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.632658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.632702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 
00:39:28.069 [2024-11-06 15:43:55.632891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.632936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.633137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.633182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.633398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.633444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.633652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.633696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.633838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.633883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 
00:39:28.069 [2024-11-06 15:43:55.634113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.634163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.634414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.634466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.634743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.634787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.634929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.634974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.635100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.635145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 
00:39:28.069 [2024-11-06 15:43:55.635351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.635405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.635604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.635649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.635803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.635847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.636050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.636094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.636299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.636346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 
00:39:28.069 [2024-11-06 15:43:55.636550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.636595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.636843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.636887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.637094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.637138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.637430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.637476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.637671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.637715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 
00:39:28.069 [2024-11-06 15:43:55.637914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.637958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.638164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.638218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.638504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.638549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.638749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.638793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.639029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.639074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 
00:39:28.069 [2024-11-06 15:43:55.639355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.069 [2024-11-06 15:43:55.639402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.069 qpair failed and we were unable to recover it. 00:39:28.069 [2024-11-06 15:43:55.639599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.639644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.639914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.639958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.640160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.640217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.640519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.640563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 
00:39:28.070 [2024-11-06 15:43:55.640844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.640889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.641036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.641079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.641290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.641336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.641547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.641591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.641843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.641890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 
00:39:28.070 [2024-11-06 15:43:55.642085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.642141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.642420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.642464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.642731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.642774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.642913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.642956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.643169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.643219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 
00:39:28.070 [2024-11-06 15:43:55.643428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.643472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.643766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.643808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.644020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.644063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.644344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.644388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.644531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.644574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 
00:39:28.070 [2024-11-06 15:43:55.644781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.644824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.645046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.645088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.645290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.645336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.645617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.645659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.645849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.645892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 
00:39:28.070 [2024-11-06 15:43:55.646150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.646199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.646434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.646478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.646674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.646717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.646858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.646901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.647186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.647240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 
00:39:28.070 [2024-11-06 15:43:55.647568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.647612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.647766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.647808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.647948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.647991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.648188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.648242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.648471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.648514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 
00:39:28.070 [2024-11-06 15:43:55.648696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.648740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.648930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.648973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.070 [2024-11-06 15:43:55.649255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.070 [2024-11-06 15:43:55.649300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.070 qpair failed and we were unable to recover it. 00:39:28.071 [2024-11-06 15:43:55.649498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.071 [2024-11-06 15:43:55.649540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.071 qpair failed and we were unable to recover it. 00:39:28.071 [2024-11-06 15:43:55.649753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.071 [2024-11-06 15:43:55.649797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.071 qpair failed and we were unable to recover it. 
00:39:28.333 [2024-11-06 15:43:55.650052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.333 [2024-11-06 15:43:55.650095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.333 qpair failed and we were unable to recover it. 00:39:28.333 [2024-11-06 15:43:55.650318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.333 [2024-11-06 15:43:55.650364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.333 qpair failed and we were unable to recover it. 00:39:28.333 [2024-11-06 15:43:55.650562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.333 [2024-11-06 15:43:55.650605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.333 qpair failed and we were unable to recover it. 00:39:28.333 [2024-11-06 15:43:55.650767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.333 [2024-11-06 15:43:55.650810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.333 qpair failed and we were unable to recover it. 00:39:28.333 [2024-11-06 15:43:55.651003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.333 [2024-11-06 15:43:55.651046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.333 qpair failed and we were unable to recover it. 
00:39:28.333 [2024-11-06 15:43:55.651234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.333 [2024-11-06 15:43:55.651279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.333 qpair failed and we were unable to recover it. 00:39:28.333 [2024-11-06 15:43:55.651473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.333 [2024-11-06 15:43:55.651515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.333 qpair failed and we were unable to recover it. 00:39:28.333 [2024-11-06 15:43:55.651722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.333 [2024-11-06 15:43:55.651765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.333 qpair failed and we were unable to recover it. 00:39:28.333 [2024-11-06 15:43:55.651978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.333 [2024-11-06 15:43:55.652022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.333 qpair failed and we were unable to recover it. 00:39:28.333 [2024-11-06 15:43:55.652230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.333 [2024-11-06 15:43:55.652274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.334 qpair failed and we were unable to recover it. 
00:39:28.334 [2024-11-06 15:43:55.652529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.334 [2024-11-06 15:43:55.652572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.334 qpair failed and we were unable to recover it. 00:39:28.334 [2024-11-06 15:43:55.652716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.334 [2024-11-06 15:43:55.652761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:39:28.334 qpair failed and we were unable to recover it. 00:39:28.334 [2024-11-06 15:43:55.652919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.334 [2024-11-06 15:43:55.652979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.334 qpair failed and we were unable to recover it. 00:39:28.334 [2024-11-06 15:43:55.653297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.334 [2024-11-06 15:43:55.653346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.334 qpair failed and we were unable to recover it. 00:39:28.334 [2024-11-06 15:43:55.653480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.334 [2024-11-06 15:43:55.653524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.334 qpair failed and we were unable to recover it. 
00:39:28.334 [2024-11-06 15:43:55.653758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.334 [2024-11-06 15:43:55.653802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.334 qpair failed and we were unable to recover it. 00:39:28.334 [2024-11-06 15:43:55.654001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.334 [2024-11-06 15:43:55.654089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.334 qpair failed and we were unable to recover it. 00:39:28.334 [2024-11-06 15:43:55.654362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.334 [2024-11-06 15:43:55.654408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.334 qpair failed and we were unable to recover it. 00:39:28.334 [2024-11-06 15:43:55.654635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.334 [2024-11-06 15:43:55.654679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.334 qpair failed and we were unable to recover it. 00:39:28.334 [2024-11-06 15:43:55.654882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.334 [2024-11-06 15:43:55.654927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.334 qpair failed and we were unable to recover it. 
00:39:28.334 [2024-11-06 15:43:55.655061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.655106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.655318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.655365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.655570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.655614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.655825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.655869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.656009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.656053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.656313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.656364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.656628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.656672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.656795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.656839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.657093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.657137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.657359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.657404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.657596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.657640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.657847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.657891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.658098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.658142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.658377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.658422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.658684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.658741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.658945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.658989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.659214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.659259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.659468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.659513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.659739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.659783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.659919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.659963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.660240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.660287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.660490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.660535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.660767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.660812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.661020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.661064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.661297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.661342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.661469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.334 [2024-11-06 15:43:55.661513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.334 qpair failed and we were unable to recover it.
00:39:28.334 [2024-11-06 15:43:55.661706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.661750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.661889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.661934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.662156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.662221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.662445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.662491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 Malloc0
00:39:28.335 [2024-11-06 15:43:55.662773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.662819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.663025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.663070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:39:28.335 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:39:28.335 [2024-11-06 15:43:55.663334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:28.335 [2024-11-06 15:43:55.663404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:28.335 [2024-11-06 15:43:55.663717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.663772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.664009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.664055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.664342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.664389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.664680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.664724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.664917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.664962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.665172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.665228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.665502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.665544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.665777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.665823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.666057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.666101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.666253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.666300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.666461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:39:28.335 [2024-11-06 15:43:55.666563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.666613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.666767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.666810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.666943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.666988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.667225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.667269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.667529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.667573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.667713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.667757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.667968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.668013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.668242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.668288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.668438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.668482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.668630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.668674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.668880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.668923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.669066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.669110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.669246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.669292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.669482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.669526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.669842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.669887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.670172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.670225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.670450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.335 [2024-11-06 15:43:55.670496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.335 qpair failed and we were unable to recover it.
00:39:28.335 [2024-11-06 15:43:55.670692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.670736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.670952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.670996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.671146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.671191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.671467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.671512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.671795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.671839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.672049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.672093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.672255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.672300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.672590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.672634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.672829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.672873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.673083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.673128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.673335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.673382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.673590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.673635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.673831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.673876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.674154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.674198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.674416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.674461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.674612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.674657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:28.336 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:39:28.336 [2024-11-06 15:43:55.674940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:28.336 [2024-11-06 15:43:55.674983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:28.336 [2024-11-06 15:43:55.675243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.675290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.675569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.675614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.675829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.675874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.676078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.676123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.676323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.676375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.676659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.676709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.676920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.676963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.677170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.677242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.677459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.677504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.677737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.677779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.677972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.678015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.678276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.678321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.678581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.678625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.678773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.678817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.678961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.336 [2024-11-06 15:43:55.679006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:39:28.336 qpair failed and we were unable to recover it.
00:39:28.336 [2024-11-06 15:43:55.679300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.336 [2024-11-06 15:43:55.679346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.336 qpair failed and we were unable to recover it. 00:39:28.336 [2024-11-06 15:43:55.679625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.336 [2024-11-06 15:43:55.679668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.336 qpair failed and we were unable to recover it. 00:39:28.336 [2024-11-06 15:43:55.679871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.336 [2024-11-06 15:43:55.679915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.336 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.680122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.680165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.680435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.680513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 
00:39:28.337 [2024-11-06 15:43:55.680694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.680744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.680955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.681002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.681221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.681267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.681501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.681546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.681808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.681852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 
00:39:28.337 [2024-11-06 15:43:55.682106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.682151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.682508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.682565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.337 [2024-11-06 15:43:55.682786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.682830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:28.337 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.337 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:28.337 [2024-11-06 15:43:55.683085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.683129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 
00:39:28.337 [2024-11-06 15:43:55.683428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.683484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.683765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.683808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.683963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.684006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.684169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.684225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.684421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.684464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 
00:39:28.337 [2024-11-06 15:43:55.684691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.684734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.684897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.684942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.685227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.685272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.685472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.685515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.685801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.685845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 
00:39:28.337 [2024-11-06 15:43:55.685983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.686025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.686221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.686266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.686466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.686511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.686797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.686840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.687124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.687168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 
00:39:28.337 [2024-11-06 15:43:55.687461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.687506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.687709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.687754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.688055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.688099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.688326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.688372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.688566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.688611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 
00:39:28.337 [2024-11-06 15:43:55.688805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.688849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.689059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.689104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.689252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.689298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.337 [2024-11-06 15:43:55.689504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.337 [2024-11-06 15:43:55.689548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.337 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.689720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.689764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 
00:39:28.338 [2024-11-06 15:43:55.689915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.689958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.690148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.690191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.690404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.690449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.690596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.690639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 
00:39:28.338 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.338 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:28.338 [2024-11-06 15:43:55.690831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.690876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.338 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:28.338 [2024-11-06 15:43:55.691189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.691243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.691466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.691510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 
00:39:28.338 [2024-11-06 15:43:55.691717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.691760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.691965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.692009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.692167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.692220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.692382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.692427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.692700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.692743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 
00:39:28.338 [2024-11-06 15:43:55.692870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.692914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.693113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.693164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.693427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.693473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.693610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.693654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.693802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.693845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 
00:39:28.338 [2024-11-06 15:43:55.694109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.694154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.694315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.694360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 00:39:28.338 [2024-11-06 15:43:55.694577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:28.338 [2024-11-06 15:43:55.694620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032eb80 with addr=10.0.0.2, port=4420 00:39:28.338 qpair failed and we were unable to recover it. 
00:39:28.338 [2024-11-06 15:43:55.694748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:28.338 [2024-11-06 15:43:55.697958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.338 [2024-11-06 15:43:55.698108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.338 [2024-11-06 15:43:55.698183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.338 [2024-11-06 15:43:55.698236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.338 [2024-11-06 15:43:55.698266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.338 [2024-11-06 15:43:55.698340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.338 qpair failed and we were unable to recover it. 
00:39:28.338 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.338 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:28.338 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.338 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:28.338 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.338 15:43:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4109724 00:39:28.338 [2024-11-06 15:43:55.707996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.338 [2024-11-06 15:43:55.708113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.338 [2024-11-06 15:43:55.708165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.338 [2024-11-06 15:43:55.708194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.338 [2024-11-06 15:43:55.708232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.708284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.717907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.718060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.718091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.339 [2024-11-06 15:43:55.718110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.339 [2024-11-06 15:43:55.718125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.718161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.728035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.728123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.728147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.339 [2024-11-06 15:43:55.728160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.339 [2024-11-06 15:43:55.728170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.728207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.737882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.737967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.737990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.339 [2024-11-06 15:43:55.738001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.339 [2024-11-06 15:43:55.738010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.738033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.747944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.748046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.748069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.339 [2024-11-06 15:43:55.748083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.339 [2024-11-06 15:43:55.748092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.748114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.758052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.758136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.758158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.339 [2024-11-06 15:43:55.758170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.339 [2024-11-06 15:43:55.758178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.758208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.767933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.768017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.768038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.339 [2024-11-06 15:43:55.768051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.339 [2024-11-06 15:43:55.768060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.768081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.778027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.778103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.778126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.339 [2024-11-06 15:43:55.778138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.339 [2024-11-06 15:43:55.778147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.778169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.788127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.788218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.788241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.339 [2024-11-06 15:43:55.788253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.339 [2024-11-06 15:43:55.788263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.788288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.798104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.798182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.798209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.339 [2024-11-06 15:43:55.798222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.339 [2024-11-06 15:43:55.798232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.798255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.808165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.808254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.808275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.339 [2024-11-06 15:43:55.808287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.339 [2024-11-06 15:43:55.808297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.808320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.818127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.818211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.818234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.339 [2024-11-06 15:43:55.818246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.339 [2024-11-06 15:43:55.818255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.818277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.828207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.828292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.828313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.339 [2024-11-06 15:43:55.828324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.339 [2024-11-06 15:43:55.828333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.339 [2024-11-06 15:43:55.828356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.339 qpair failed and we were unable to recover it. 
00:39:28.339 [2024-11-06 15:43:55.838219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.339 [2024-11-06 15:43:55.838323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.339 [2024-11-06 15:43:55.838345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.838357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.838366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.838388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.340 [2024-11-06 15:43:55.848245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.340 [2024-11-06 15:43:55.848376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.340 [2024-11-06 15:43:55.848397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.848409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.848418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.848440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.340 [2024-11-06 15:43:55.858215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.340 [2024-11-06 15:43:55.858292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.340 [2024-11-06 15:43:55.858315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.858326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.858335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.858356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.340 [2024-11-06 15:43:55.868163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.340 [2024-11-06 15:43:55.868246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.340 [2024-11-06 15:43:55.868269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.868281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.868290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.868311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.340 [2024-11-06 15:43:55.878257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.340 [2024-11-06 15:43:55.878336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.340 [2024-11-06 15:43:55.878359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.878373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.878383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.878407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.340 [2024-11-06 15:43:55.888360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.340 [2024-11-06 15:43:55.888439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.340 [2024-11-06 15:43:55.888462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.888474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.888483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.888504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.340 [2024-11-06 15:43:55.898285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.340 [2024-11-06 15:43:55.898365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.340 [2024-11-06 15:43:55.898387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.898399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.898410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.898433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.340 [2024-11-06 15:43:55.908348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.340 [2024-11-06 15:43:55.908426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.340 [2024-11-06 15:43:55.908448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.908460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.908475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.908498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.340 [2024-11-06 15:43:55.918433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.340 [2024-11-06 15:43:55.918508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.340 [2024-11-06 15:43:55.918530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.918542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.918550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.918575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.340 [2024-11-06 15:43:55.928414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.340 [2024-11-06 15:43:55.928545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.340 [2024-11-06 15:43:55.928567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.928580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.928588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.928610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.340 [2024-11-06 15:43:55.938454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.340 [2024-11-06 15:43:55.938530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.340 [2024-11-06 15:43:55.938552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.938563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.938572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.938594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.340 [2024-11-06 15:43:55.948450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.340 [2024-11-06 15:43:55.948524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.340 [2024-11-06 15:43:55.948546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.948558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.948567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.948589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.340 [2024-11-06 15:43:55.958553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.340 [2024-11-06 15:43:55.958640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.340 [2024-11-06 15:43:55.958662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.340 [2024-11-06 15:43:55.958674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.340 [2024-11-06 15:43:55.958682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.340 [2024-11-06 15:43:55.958704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.340 qpair failed and we were unable to recover it. 
00:39:28.601 [2024-11-06 15:43:55.968531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.601 [2024-11-06 15:43:55.968651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.601 [2024-11-06 15:43:55.968672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.601 [2024-11-06 15:43:55.968684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.601 [2024-11-06 15:43:55.968693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.601 [2024-11-06 15:43:55.968715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.601 qpair failed and we were unable to recover it. 
00:39:28.602 [2024-11-06 15:43:55.978621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.602 [2024-11-06 15:43:55.978703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.602 [2024-11-06 15:43:55.978725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.602 [2024-11-06 15:43:55.978737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.602 [2024-11-06 15:43:55.978746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.602 [2024-11-06 15:43:55.978768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.602 qpair failed and we were unable to recover it. 
00:39:28.602 [2024-11-06 15:43:55.988564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.602 [2024-11-06 15:43:55.988674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.602 [2024-11-06 15:43:55.988696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.602 [2024-11-06 15:43:55.988708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.602 [2024-11-06 15:43:55.988717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.602 [2024-11-06 15:43:55.988738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.602 qpair failed and we were unable to recover it. 
00:39:28.602 [2024-11-06 15:43:55.998630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.602 [2024-11-06 15:43:55.998706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.602 [2024-11-06 15:43:55.998728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.602 [2024-11-06 15:43:55.998740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.602 [2024-11-06 15:43:55.998748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.602 [2024-11-06 15:43:55.998770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.602 qpair failed and we were unable to recover it. 
00:39:28.602 [2024-11-06 15:43:56.008624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.602 [2024-11-06 15:43:56.008709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.602 [2024-11-06 15:43:56.008736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.602 [2024-11-06 15:43:56.008748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.602 [2024-11-06 15:43:56.008758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.602 [2024-11-06 15:43:56.008780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.602 qpair failed and we were unable to recover it. 
00:39:28.602 [2024-11-06 15:43:56.018793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.602 [2024-11-06 15:43:56.018873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.602 [2024-11-06 15:43:56.018896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.602 [2024-11-06 15:43:56.018908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.602 [2024-11-06 15:43:56.018917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.602 [2024-11-06 15:43:56.018939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.602 qpair failed and we were unable to recover it. 
00:39:28.602 [2024-11-06 15:43:56.028729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.602 [2024-11-06 15:43:56.028809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.602 [2024-11-06 15:43:56.028832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.602 [2024-11-06 15:43:56.028844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.602 [2024-11-06 15:43:56.028853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.602 [2024-11-06 15:43:56.028875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.602 qpair failed and we were unable to recover it. 
00:39:28.602 [2024-11-06 15:43:56.038803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.602 [2024-11-06 15:43:56.038882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.602 [2024-11-06 15:43:56.038904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.602 [2024-11-06 15:43:56.038916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.602 [2024-11-06 15:43:56.038926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.602 [2024-11-06 15:43:56.038948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.602 qpair failed and we were unable to recover it. 
00:39:28.602 [2024-11-06 15:43:56.048746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.602 [2024-11-06 15:43:56.048875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.602 [2024-11-06 15:43:56.048897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.602 [2024-11-06 15:43:56.048908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.602 [2024-11-06 15:43:56.048920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.602 [2024-11-06 15:43:56.048942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.602 qpair failed and we were unable to recover it. 
00:39:28.602 [2024-11-06 15:43:56.058868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.602 [2024-11-06 15:43:56.058943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.602 [2024-11-06 15:43:56.058965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.602 [2024-11-06 15:43:56.058976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.602 [2024-11-06 15:43:56.058985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.602 [2024-11-06 15:43:56.059011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.602 qpair failed and we were unable to recover it. 
00:39:28.602 [2024-11-06 15:43:56.068761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.602 [2024-11-06 15:43:56.068848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.602 [2024-11-06 15:43:56.068869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.602 [2024-11-06 15:43:56.068881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.602 [2024-11-06 15:43:56.068889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.602 [2024-11-06 15:43:56.068911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.602 qpair failed and we were unable to recover it. 
00:39:28.602 [2024-11-06 15:43:56.078860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.602 [2024-11-06 15:43:56.078935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.602 [2024-11-06 15:43:56.078957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.602 [2024-11-06 15:43:56.078968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.602 [2024-11-06 15:43:56.078977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.602 [2024-11-06 15:43:56.078999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.602 qpair failed and we were unable to recover it.
00:39:28.602 [2024-11-06 15:43:56.088860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.602 [2024-11-06 15:43:56.088936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.602 [2024-11-06 15:43:56.088958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.602 [2024-11-06 15:43:56.088970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.602 [2024-11-06 15:43:56.088979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.602 [2024-11-06 15:43:56.089000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.602 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.098977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.099075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.099097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.603 [2024-11-06 15:43:56.099108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.603 [2024-11-06 15:43:56.099117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.603 [2024-11-06 15:43:56.099139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.603 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.108993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.109076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.109097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.603 [2024-11-06 15:43:56.109109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.603 [2024-11-06 15:43:56.109118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.603 [2024-11-06 15:43:56.109141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.603 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.119078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.119153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.119176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.603 [2024-11-06 15:43:56.119188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.603 [2024-11-06 15:43:56.119197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.603 [2024-11-06 15:43:56.119227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.603 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.129027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.129105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.129128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.603 [2024-11-06 15:43:56.129140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.603 [2024-11-06 15:43:56.129149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.603 [2024-11-06 15:43:56.129171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.603 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.139039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.139119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.139144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.603 [2024-11-06 15:43:56.139156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.603 [2024-11-06 15:43:56.139164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.603 [2024-11-06 15:43:56.139185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.603 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.149028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.149109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.149132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.603 [2024-11-06 15:43:56.149143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.603 [2024-11-06 15:43:56.149152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.603 [2024-11-06 15:43:56.149174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.603 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.159064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.159143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.159165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.603 [2024-11-06 15:43:56.159177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.603 [2024-11-06 15:43:56.159185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.603 [2024-11-06 15:43:56.159213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.603 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.169091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.169184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.169223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.603 [2024-11-06 15:43:56.169235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.603 [2024-11-06 15:43:56.169244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.603 [2024-11-06 15:43:56.169267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.603 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.179263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.179364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.179386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.603 [2024-11-06 15:43:56.179398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.603 [2024-11-06 15:43:56.179412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.603 [2024-11-06 15:43:56.179435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.603 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.189241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.189321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.189343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.603 [2024-11-06 15:43:56.189355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.603 [2024-11-06 15:43:56.189364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.603 [2024-11-06 15:43:56.189386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.603 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.199204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.199281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.199303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.603 [2024-11-06 15:43:56.199314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.603 [2024-11-06 15:43:56.199323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.603 [2024-11-06 15:43:56.199345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.603 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.209146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.209228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.209250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.603 [2024-11-06 15:43:56.209262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.603 [2024-11-06 15:43:56.209271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.603 [2024-11-06 15:43:56.209293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.603 qpair failed and we were unable to recover it.
00:39:28.603 [2024-11-06 15:43:56.219368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.603 [2024-11-06 15:43:56.219457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.603 [2024-11-06 15:43:56.219479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.604 [2024-11-06 15:43:56.219490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.604 [2024-11-06 15:43:56.219499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.604 [2024-11-06 15:43:56.219521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.604 qpair failed and we were unable to recover it.
00:39:28.604 [2024-11-06 15:43:56.229395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.604 [2024-11-06 15:43:56.229477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.604 [2024-11-06 15:43:56.229498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.604 [2024-11-06 15:43:56.229510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.604 [2024-11-06 15:43:56.229519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.604 [2024-11-06 15:43:56.229540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.604 qpair failed and we were unable to recover it.
00:39:28.863 [2024-11-06 15:43:56.239377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.863 [2024-11-06 15:43:56.239482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.863 [2024-11-06 15:43:56.239504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.863 [2024-11-06 15:43:56.239515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.863 [2024-11-06 15:43:56.239525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.863 [2024-11-06 15:43:56.239546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.863 qpair failed and we were unable to recover it.
00:39:28.863 [2024-11-06 15:43:56.249338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.863 [2024-11-06 15:43:56.249437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.863 [2024-11-06 15:43:56.249459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.863 [2024-11-06 15:43:56.249470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.863 [2024-11-06 15:43:56.249479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.863 [2024-11-06 15:43:56.249501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.863 qpair failed and we were unable to recover it.
00:39:28.863 [2024-11-06 15:43:56.259434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.863 [2024-11-06 15:43:56.259536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.863 [2024-11-06 15:43:56.259558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.863 [2024-11-06 15:43:56.259569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.863 [2024-11-06 15:43:56.259578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.863 [2024-11-06 15:43:56.259601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.863 qpair failed and we were unable to recover it.
00:39:28.863 [2024-11-06 15:43:56.269456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.863 [2024-11-06 15:43:56.269545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.863 [2024-11-06 15:43:56.269567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.863 [2024-11-06 15:43:56.269579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.863 [2024-11-06 15:43:56.269588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.863 [2024-11-06 15:43:56.269610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.863 qpair failed and we were unable to recover it.
00:39:28.863 [2024-11-06 15:43:56.279456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.863 [2024-11-06 15:43:56.279530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.864 [2024-11-06 15:43:56.279552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.864 [2024-11-06 15:43:56.279563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.864 [2024-11-06 15:43:56.279572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.864 [2024-11-06 15:43:56.279594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.864 qpair failed and we were unable to recover it.
00:39:28.864 [2024-11-06 15:43:56.289475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.864 [2024-11-06 15:43:56.289554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.864 [2024-11-06 15:43:56.289576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.864 [2024-11-06 15:43:56.289587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.864 [2024-11-06 15:43:56.289596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.864 [2024-11-06 15:43:56.289617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.864 qpair failed and we were unable to recover it.
00:39:28.864 [2024-11-06 15:43:56.299554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.864 [2024-11-06 15:43:56.299655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.864 [2024-11-06 15:43:56.299676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.864 [2024-11-06 15:43:56.299687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.864 [2024-11-06 15:43:56.299696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.864 [2024-11-06 15:43:56.299718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.864 qpair failed and we were unable to recover it.
00:39:28.864 [2024-11-06 15:43:56.309609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.864 [2024-11-06 15:43:56.309704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.864 [2024-11-06 15:43:56.309724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.864 [2024-11-06 15:43:56.309738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.864 [2024-11-06 15:43:56.309747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.864 [2024-11-06 15:43:56.309769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.864 qpair failed and we were unable to recover it.
00:39:28.864 [2024-11-06 15:43:56.319508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.864 [2024-11-06 15:43:56.319584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.864 [2024-11-06 15:43:56.319605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.864 [2024-11-06 15:43:56.319617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.864 [2024-11-06 15:43:56.319626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.864 [2024-11-06 15:43:56.319647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.864 qpair failed and we were unable to recover it.
00:39:28.864 [2024-11-06 15:43:56.329543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.864 [2024-11-06 15:43:56.329624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.864 [2024-11-06 15:43:56.329646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.864 [2024-11-06 15:43:56.329658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.864 [2024-11-06 15:43:56.329667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.864 [2024-11-06 15:43:56.329688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.864 qpair failed and we were unable to recover it.
00:39:28.864 [2024-11-06 15:43:56.339690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.864 [2024-11-06 15:43:56.339783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.864 [2024-11-06 15:43:56.339806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.864 [2024-11-06 15:43:56.339817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.864 [2024-11-06 15:43:56.339825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.864 [2024-11-06 15:43:56.339848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.864 qpair failed and we were unable to recover it.
00:39:28.864 [2024-11-06 15:43:56.349656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.864 [2024-11-06 15:43:56.349729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.864 [2024-11-06 15:43:56.349752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.864 [2024-11-06 15:43:56.349764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.864 [2024-11-06 15:43:56.349773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.864 [2024-11-06 15:43:56.349798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.864 qpair failed and we were unable to recover it.
00:39:28.864 [2024-11-06 15:43:56.359635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.864 [2024-11-06 15:43:56.359763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.864 [2024-11-06 15:43:56.359785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.864 [2024-11-06 15:43:56.359797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.864 [2024-11-06 15:43:56.359805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.864 [2024-11-06 15:43:56.359827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.864 qpair failed and we were unable to recover it.
00:39:28.864 [2024-11-06 15:43:56.369658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.864 [2024-11-06 15:43:56.369742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.864 [2024-11-06 15:43:56.369764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.864 [2024-11-06 15:43:56.369776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.864 [2024-11-06 15:43:56.369785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.864 [2024-11-06 15:43:56.369807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.864 qpair failed and we were unable to recover it.
00:39:28.864 [2024-11-06 15:43:56.379673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.864 [2024-11-06 15:43:56.379763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.864 [2024-11-06 15:43:56.379785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.864 [2024-11-06 15:43:56.379796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.864 [2024-11-06 15:43:56.379805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.864 [2024-11-06 15:43:56.379827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.864 qpair failed and we were unable to recover it.
00:39:28.864 [2024-11-06 15:43:56.389755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.864 [2024-11-06 15:43:56.389855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.864 [2024-11-06 15:43:56.389876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.864 [2024-11-06 15:43:56.389887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.864 [2024-11-06 15:43:56.389896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.864 [2024-11-06 15:43:56.389921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.864 qpair failed and we were unable to recover it.
00:39:28.864 [2024-11-06 15:43:56.399745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.864 [2024-11-06 15:43:56.399833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.865 [2024-11-06 15:43:56.399855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.865 [2024-11-06 15:43:56.399867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.865 [2024-11-06 15:43:56.399875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.865 [2024-11-06 15:43:56.399897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.865 qpair failed and we were unable to recover it.
00:39:28.865 [2024-11-06 15:43:56.409734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.865 [2024-11-06 15:43:56.409816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.865 [2024-11-06 15:43:56.409838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.865 [2024-11-06 15:43:56.409849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.865 [2024-11-06 15:43:56.409859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.865 [2024-11-06 15:43:56.409881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.865 qpair failed and we were unable to recover it.
00:39:28.865 [2024-11-06 15:43:56.419890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.865 [2024-11-06 15:43:56.419970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.865 [2024-11-06 15:43:56.419992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.865 [2024-11-06 15:43:56.420003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.865 [2024-11-06 15:43:56.420012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:28.865 [2024-11-06 15:43:56.420041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:28.865 qpair failed and we were unable to recover it.
00:39:28.865 [2024-11-06 15:43:56.429917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.865 [2024-11-06 15:43:56.429987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.865 [2024-11-06 15:43:56.430008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.865 [2024-11-06 15:43:56.430020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.865 [2024-11-06 15:43:56.430029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.865 [2024-11-06 15:43:56.430050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.865 qpair failed and we were unable to recover it. 
00:39:28.865 [2024-11-06 15:43:56.439880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.865 [2024-11-06 15:43:56.439973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.865 [2024-11-06 15:43:56.439995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.865 [2024-11-06 15:43:56.440009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.865 [2024-11-06 15:43:56.440018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.865 [2024-11-06 15:43:56.440040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.865 qpair failed and we were unable to recover it. 
00:39:28.865 [2024-11-06 15:43:56.449916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.865 [2024-11-06 15:43:56.449997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.865 [2024-11-06 15:43:56.450018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.865 [2024-11-06 15:43:56.450030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.865 [2024-11-06 15:43:56.450039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.865 [2024-11-06 15:43:56.450060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.865 qpair failed and we were unable to recover it. 
00:39:28.865 [2024-11-06 15:43:56.459952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.865 [2024-11-06 15:43:56.460030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.865 [2024-11-06 15:43:56.460051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.865 [2024-11-06 15:43:56.460063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.865 [2024-11-06 15:43:56.460072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.865 [2024-11-06 15:43:56.460093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.865 qpair failed and we were unable to recover it. 
00:39:28.865 [2024-11-06 15:43:56.470008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.865 [2024-11-06 15:43:56.470088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.865 [2024-11-06 15:43:56.470109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.865 [2024-11-06 15:43:56.470121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.865 [2024-11-06 15:43:56.470130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.865 [2024-11-06 15:43:56.470152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.865 qpair failed and we were unable to recover it. 
00:39:28.865 [2024-11-06 15:43:56.479955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.865 [2024-11-06 15:43:56.480033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.865 [2024-11-06 15:43:56.480055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.865 [2024-11-06 15:43:56.480068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.865 [2024-11-06 15:43:56.480077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.865 [2024-11-06 15:43:56.480108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.865 qpair failed and we were unable to recover it. 
00:39:28.865 [2024-11-06 15:43:56.489983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.865 [2024-11-06 15:43:56.490108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.865 [2024-11-06 15:43:56.490130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.865 [2024-11-06 15:43:56.490141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.865 [2024-11-06 15:43:56.490151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:28.865 [2024-11-06 15:43:56.490173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:28.865 qpair failed and we were unable to recover it. 
00:39:29.124 [2024-11-06 15:43:56.500076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.124 [2024-11-06 15:43:56.500161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.124 [2024-11-06 15:43:56.500185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.124 [2024-11-06 15:43:56.500197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.124 [2024-11-06 15:43:56.500215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.124 [2024-11-06 15:43:56.500238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.124 qpair failed and we were unable to recover it. 
00:39:29.124 [2024-11-06 15:43:56.510073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.124 [2024-11-06 15:43:56.510160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.124 [2024-11-06 15:43:56.510182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.124 [2024-11-06 15:43:56.510194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.124 [2024-11-06 15:43:56.510208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.124 [2024-11-06 15:43:56.510233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.124 qpair failed and we were unable to recover it. 
00:39:29.124 [2024-11-06 15:43:56.520096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.124 [2024-11-06 15:43:56.520172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.124 [2024-11-06 15:43:56.520194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.124 [2024-11-06 15:43:56.520211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.124 [2024-11-06 15:43:56.520221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.125 [2024-11-06 15:43:56.520242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.125 qpair failed and we were unable to recover it. 
00:39:29.125 [2024-11-06 15:43:56.530244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.125 [2024-11-06 15:43:56.530323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.125 [2024-11-06 15:43:56.530346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.125 [2024-11-06 15:43:56.530358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.125 [2024-11-06 15:43:56.530367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.125 [2024-11-06 15:43:56.530388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.125 qpair failed and we were unable to recover it. 
00:39:29.125 [2024-11-06 15:43:56.540194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.125 [2024-11-06 15:43:56.540275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.125 [2024-11-06 15:43:56.540296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.125 [2024-11-06 15:43:56.540308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.125 [2024-11-06 15:43:56.540317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.125 [2024-11-06 15:43:56.540340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.125 qpair failed and we were unable to recover it. 
00:39:29.125 [2024-11-06 15:43:56.550178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.125 [2024-11-06 15:43:56.550266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.125 [2024-11-06 15:43:56.550288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.125 [2024-11-06 15:43:56.550299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.125 [2024-11-06 15:43:56.550308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.125 [2024-11-06 15:43:56.550330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.125 qpair failed and we were unable to recover it. 
00:39:29.125 [2024-11-06 15:43:56.560268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.125 [2024-11-06 15:43:56.560349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.125 [2024-11-06 15:43:56.560371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.125 [2024-11-06 15:43:56.560382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.125 [2024-11-06 15:43:56.560391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.125 [2024-11-06 15:43:56.560412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.125 qpair failed and we were unable to recover it. 
00:39:29.125 [2024-11-06 15:43:56.570272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.125 [2024-11-06 15:43:56.570354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.125 [2024-11-06 15:43:56.570379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.125 [2024-11-06 15:43:56.570391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.125 [2024-11-06 15:43:56.570400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.125 [2024-11-06 15:43:56.570421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.125 qpair failed and we were unable to recover it. 
00:39:29.125 [2024-11-06 15:43:56.580286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.125 [2024-11-06 15:43:56.580392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.125 [2024-11-06 15:43:56.580414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.125 [2024-11-06 15:43:56.580425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.125 [2024-11-06 15:43:56.580434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.125 [2024-11-06 15:43:56.580456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.125 qpair failed and we were unable to recover it. 
00:39:29.125 [2024-11-06 15:43:56.590367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.125 [2024-11-06 15:43:56.590453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.125 [2024-11-06 15:43:56.590475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.125 [2024-11-06 15:43:56.590486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.125 [2024-11-06 15:43:56.590495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.125 [2024-11-06 15:43:56.590518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.125 qpair failed and we were unable to recover it. 
00:39:29.125 [2024-11-06 15:43:56.600363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.125 [2024-11-06 15:43:56.600436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.125 [2024-11-06 15:43:56.600457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.125 [2024-11-06 15:43:56.600469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.125 [2024-11-06 15:43:56.600478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.125 [2024-11-06 15:43:56.600500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.125 qpair failed and we were unable to recover it. 
00:39:29.125 [2024-11-06 15:43:56.610368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.125 [2024-11-06 15:43:56.610475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.125 [2024-11-06 15:43:56.610497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.125 [2024-11-06 15:43:56.610509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.125 [2024-11-06 15:43:56.610523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.125 [2024-11-06 15:43:56.610546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.125 qpair failed and we were unable to recover it. 
00:39:29.125 [2024-11-06 15:43:56.620432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.125 [2024-11-06 15:43:56.620567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.125 [2024-11-06 15:43:56.620594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.125 [2024-11-06 15:43:56.620607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.125 [2024-11-06 15:43:56.620617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.125 [2024-11-06 15:43:56.620639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.125 qpair failed and we were unable to recover it. 
00:39:29.125 [2024-11-06 15:43:56.630486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.125 [2024-11-06 15:43:56.630577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.125 [2024-11-06 15:43:56.630599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.125 [2024-11-06 15:43:56.630611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.125 [2024-11-06 15:43:56.630620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.125 [2024-11-06 15:43:56.630642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.125 qpair failed and we were unable to recover it. 
00:39:29.125 [2024-11-06 15:43:56.640478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.125 [2024-11-06 15:43:56.640560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.125 [2024-11-06 15:43:56.640583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.126 [2024-11-06 15:43:56.640594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.126 [2024-11-06 15:43:56.640603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.126 [2024-11-06 15:43:56.640625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.126 qpair failed and we were unable to recover it. 
00:39:29.126 [2024-11-06 15:43:56.650489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.126 [2024-11-06 15:43:56.650567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.126 [2024-11-06 15:43:56.650589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.126 [2024-11-06 15:43:56.650601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.126 [2024-11-06 15:43:56.650609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.126 [2024-11-06 15:43:56.650632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.126 qpair failed and we were unable to recover it. 
00:39:29.126 [2024-11-06 15:43:56.660717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.126 [2024-11-06 15:43:56.660796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.126 [2024-11-06 15:43:56.660819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.126 [2024-11-06 15:43:56.660830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.126 [2024-11-06 15:43:56.660839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.126 [2024-11-06 15:43:56.660860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.126 qpair failed and we were unable to recover it. 
00:39:29.126 [2024-11-06 15:43:56.670541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.126 [2024-11-06 15:43:56.670617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.126 [2024-11-06 15:43:56.670639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.126 [2024-11-06 15:43:56.670650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.126 [2024-11-06 15:43:56.670660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.126 [2024-11-06 15:43:56.670682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.126 qpair failed and we were unable to recover it. 
00:39:29.126 [2024-11-06 15:43:56.680619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.126 [2024-11-06 15:43:56.680692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.126 [2024-11-06 15:43:56.680713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.126 [2024-11-06 15:43:56.680731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.126 [2024-11-06 15:43:56.680740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.126 [2024-11-06 15:43:56.680763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.126 qpair failed and we were unable to recover it. 
00:39:29.126 [2024-11-06 15:43:56.690654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.126 [2024-11-06 15:43:56.690732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.126 [2024-11-06 15:43:56.690755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.126 [2024-11-06 15:43:56.690767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.126 [2024-11-06 15:43:56.690775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.126 [2024-11-06 15:43:56.690798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.126 qpair failed and we were unable to recover it. 
00:39:29.126 [2024-11-06 15:43:56.700691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.126 [2024-11-06 15:43:56.700773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.126 [2024-11-06 15:43:56.700802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.126 [2024-11-06 15:43:56.700813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.126 [2024-11-06 15:43:56.700822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.126 [2024-11-06 15:43:56.700844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.126 qpair failed and we were unable to recover it.
00:39:29.126 [2024-11-06 15:43:56.710736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.126 [2024-11-06 15:43:56.710825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.126 [2024-11-06 15:43:56.710850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.126 [2024-11-06 15:43:56.710862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.126 [2024-11-06 15:43:56.710872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.126 [2024-11-06 15:43:56.710894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.126 qpair failed and we were unable to recover it.
00:39:29.126 [2024-11-06 15:43:56.720733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.126 [2024-11-06 15:43:56.720930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.126 [2024-11-06 15:43:56.720952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.126 [2024-11-06 15:43:56.720964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.126 [2024-11-06 15:43:56.720974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.126 [2024-11-06 15:43:56.721004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.126 qpair failed and we were unable to recover it.
00:39:29.126 [2024-11-06 15:43:56.730674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.126 [2024-11-06 15:43:56.730751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.126 [2024-11-06 15:43:56.730773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.126 [2024-11-06 15:43:56.730785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.126 [2024-11-06 15:43:56.730794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.126 [2024-11-06 15:43:56.730816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.126 qpair failed and we were unable to recover it.
00:39:29.126 [2024-11-06 15:43:56.740683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.126 [2024-11-06 15:43:56.740757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.126 [2024-11-06 15:43:56.740779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.126 [2024-11-06 15:43:56.740791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.126 [2024-11-06 15:43:56.740803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.126 [2024-11-06 15:43:56.740826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.126 qpair failed and we were unable to recover it.
00:39:29.126 [2024-11-06 15:43:56.750941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.126 [2024-11-06 15:43:56.751042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.126 [2024-11-06 15:43:56.751066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.126 [2024-11-06 15:43:56.751078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.126 [2024-11-06 15:43:56.751089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.126 [2024-11-06 15:43:56.751112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.126 qpair failed and we were unable to recover it.
00:39:29.386 [2024-11-06 15:43:56.760796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.386 [2024-11-06 15:43:56.760875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.386 [2024-11-06 15:43:56.760897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.386 [2024-11-06 15:43:56.760910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.386 [2024-11-06 15:43:56.760919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.386 [2024-11-06 15:43:56.760941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.386 qpair failed and we were unable to recover it.
00:39:29.386 [2024-11-06 15:43:56.770829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.386 [2024-11-06 15:43:56.770905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.386 [2024-11-06 15:43:56.770927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.386 [2024-11-06 15:43:56.770938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.386 [2024-11-06 15:43:56.770947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.386 [2024-11-06 15:43:56.770969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.386 qpair failed and we were unable to recover it.
00:39:29.386 [2024-11-06 15:43:56.780971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.386 [2024-11-06 15:43:56.781049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.386 [2024-11-06 15:43:56.781071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.386 [2024-11-06 15:43:56.781084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.386 [2024-11-06 15:43:56.781093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.386 [2024-11-06 15:43:56.781115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.386 qpair failed and we were unable to recover it.
00:39:29.386 [2024-11-06 15:43:56.790957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.386 [2024-11-06 15:43:56.791039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.386 [2024-11-06 15:43:56.791061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.386 [2024-11-06 15:43:56.791073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.386 [2024-11-06 15:43:56.791082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.386 [2024-11-06 15:43:56.791104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.386 qpair failed and we were unable to recover it.
00:39:29.386 [2024-11-06 15:43:56.800980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.386 [2024-11-06 15:43:56.801060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.386 [2024-11-06 15:43:56.801082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.386 [2024-11-06 15:43:56.801094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.386 [2024-11-06 15:43:56.801103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.386 [2024-11-06 15:43:56.801125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.386 qpair failed and we were unable to recover it.
00:39:29.386 [2024-11-06 15:43:56.810980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.386 [2024-11-06 15:43:56.811067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.386 [2024-11-06 15:43:56.811089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.386 [2024-11-06 15:43:56.811100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.811109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.811131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.387 [2024-11-06 15:43:56.821088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.387 [2024-11-06 15:43:56.821177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.387 [2024-11-06 15:43:56.821198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.387 [2024-11-06 15:43:56.821215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.821224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.821247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.387 [2024-11-06 15:43:56.831021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.387 [2024-11-06 15:43:56.831128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.387 [2024-11-06 15:43:56.831150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.387 [2024-11-06 15:43:56.831161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.831171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.831192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.387 [2024-11-06 15:43:56.841107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.387 [2024-11-06 15:43:56.841183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.387 [2024-11-06 15:43:56.841210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.387 [2024-11-06 15:43:56.841222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.841231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.841253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.387 [2024-11-06 15:43:56.851044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.387 [2024-11-06 15:43:56.851124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.387 [2024-11-06 15:43:56.851146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.387 [2024-11-06 15:43:56.851157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.851166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.851188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.387 [2024-11-06 15:43:56.861194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.387 [2024-11-06 15:43:56.861297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.387 [2024-11-06 15:43:56.861319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.387 [2024-11-06 15:43:56.861330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.861339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.861361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.387 [2024-11-06 15:43:56.871119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.387 [2024-11-06 15:43:56.871196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.387 [2024-11-06 15:43:56.871222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.387 [2024-11-06 15:43:56.871238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.871247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.871270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.387 [2024-11-06 15:43:56.881164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.387 [2024-11-06 15:43:56.881245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.387 [2024-11-06 15:43:56.881268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.387 [2024-11-06 15:43:56.881280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.881288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.881310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.387 [2024-11-06 15:43:56.891239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.387 [2024-11-06 15:43:56.891335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.387 [2024-11-06 15:43:56.891357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.387 [2024-11-06 15:43:56.891368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.891377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.891400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.387 [2024-11-06 15:43:56.901182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.387 [2024-11-06 15:43:56.901262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.387 [2024-11-06 15:43:56.901284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.387 [2024-11-06 15:43:56.901296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.901305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.901327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.387 [2024-11-06 15:43:56.911242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.387 [2024-11-06 15:43:56.911373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.387 [2024-11-06 15:43:56.911395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.387 [2024-11-06 15:43:56.911406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.911415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.911440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.387 [2024-11-06 15:43:56.921309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.387 [2024-11-06 15:43:56.921409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.387 [2024-11-06 15:43:56.921433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.387 [2024-11-06 15:43:56.921445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.921455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.921477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.387 [2024-11-06 15:43:56.931272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.387 [2024-11-06 15:43:56.931346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.387 [2024-11-06 15:43:56.931368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.387 [2024-11-06 15:43:56.931379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.387 [2024-11-06 15:43:56.931388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.387 [2024-11-06 15:43:56.931410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.387 qpair failed and we were unable to recover it.
00:39:29.388 [2024-11-06 15:43:56.941418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.388 [2024-11-06 15:43:56.941492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.388 [2024-11-06 15:43:56.941513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.388 [2024-11-06 15:43:56.941525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.388 [2024-11-06 15:43:56.941534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.388 [2024-11-06 15:43:56.941557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.388 qpair failed and we were unable to recover it.
00:39:29.388 [2024-11-06 15:43:56.951383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.388 [2024-11-06 15:43:56.951462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.388 [2024-11-06 15:43:56.951485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.388 [2024-11-06 15:43:56.951496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.388 [2024-11-06 15:43:56.951506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.388 [2024-11-06 15:43:56.951528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.388 qpair failed and we were unable to recover it.
00:39:29.388 [2024-11-06 15:43:56.961366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.388 [2024-11-06 15:43:56.961451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.388 [2024-11-06 15:43:56.961473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.388 [2024-11-06 15:43:56.961485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.388 [2024-11-06 15:43:56.961494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.388 [2024-11-06 15:43:56.961517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.388 qpair failed and we were unable to recover it.
00:39:29.388 [2024-11-06 15:43:56.971365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.388 [2024-11-06 15:43:56.971443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.388 [2024-11-06 15:43:56.971465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.388 [2024-11-06 15:43:56.971477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.388 [2024-11-06 15:43:56.971486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.388 [2024-11-06 15:43:56.971507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.388 qpair failed and we were unable to recover it.
00:39:29.388 [2024-11-06 15:43:56.981552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.388 [2024-11-06 15:43:56.981646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.388 [2024-11-06 15:43:56.981668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.388 [2024-11-06 15:43:56.981680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.388 [2024-11-06 15:43:56.981688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.388 [2024-11-06 15:43:56.981711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.388 qpair failed and we were unable to recover it.
00:39:29.388 [2024-11-06 15:43:56.991448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.388 [2024-11-06 15:43:56.991527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.388 [2024-11-06 15:43:56.991549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.388 [2024-11-06 15:43:56.991560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.388 [2024-11-06 15:43:56.991569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.388 [2024-11-06 15:43:56.991591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.388 qpair failed and we were unable to recover it.
00:39:29.388 [2024-11-06 15:43:57.001626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.388 [2024-11-06 15:43:57.001716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.388 [2024-11-06 15:43:57.001742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.388 [2024-11-06 15:43:57.001755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.388 [2024-11-06 15:43:57.001765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.388 [2024-11-06 15:43:57.001787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.388 qpair failed and we were unable to recover it.
00:39:29.388 [2024-11-06 15:43:57.011570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.388 [2024-11-06 15:43:57.011651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.388 [2024-11-06 15:43:57.011673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.388 [2024-11-06 15:43:57.011685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.388 [2024-11-06 15:43:57.011694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.388 [2024-11-06 15:43:57.011716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.388 qpair failed and we were unable to recover it.
00:39:29.647 [2024-11-06 15:43:57.021618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.647 [2024-11-06 15:43:57.021693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.647 [2024-11-06 15:43:57.021716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.648 [2024-11-06 15:43:57.021728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.648 [2024-11-06 15:43:57.021737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.648 [2024-11-06 15:43:57.021759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.648 qpair failed and we were unable to recover it.
00:39:29.648 [2024-11-06 15:43:57.031546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.648 [2024-11-06 15:43:57.031631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.648 [2024-11-06 15:43:57.031653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.648 [2024-11-06 15:43:57.031665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.648 [2024-11-06 15:43:57.031673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.648 [2024-11-06 15:43:57.031695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.648 qpair failed and we were unable to recover it.
00:39:29.648 [2024-11-06 15:43:57.041584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.648 [2024-11-06 15:43:57.041662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.648 [2024-11-06 15:43:57.041683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.648 [2024-11-06 15:43:57.041695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.648 [2024-11-06 15:43:57.041703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:29.648 [2024-11-06 15:43:57.041728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:29.648 qpair failed and we were unable to recover it.
00:39:29.648 [2024-11-06 15:43:57.051613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.648 [2024-11-06 15:43:57.051689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.648 [2024-11-06 15:43:57.051711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.648 [2024-11-06 15:43:57.051723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.648 [2024-11-06 15:43:57.051731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.648 [2024-11-06 15:43:57.051757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.648 qpair failed and we were unable to recover it. 
00:39:29.648 [2024-11-06 15:43:57.061692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.648 [2024-11-06 15:43:57.061769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.648 [2024-11-06 15:43:57.061791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.648 [2024-11-06 15:43:57.061803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.648 [2024-11-06 15:43:57.061812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.648 [2024-11-06 15:43:57.061833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.648 qpair failed and we were unable to recover it. 
00:39:29.648 [2024-11-06 15:43:57.071710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.648 [2024-11-06 15:43:57.071782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.648 [2024-11-06 15:43:57.071804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.648 [2024-11-06 15:43:57.071816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.648 [2024-11-06 15:43:57.071825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.648 [2024-11-06 15:43:57.071846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.648 qpair failed and we were unable to recover it. 
00:39:29.648 [2024-11-06 15:43:57.081723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.648 [2024-11-06 15:43:57.081805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.648 [2024-11-06 15:43:57.081827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.648 [2024-11-06 15:43:57.081839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.648 [2024-11-06 15:43:57.081848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.648 [2024-11-06 15:43:57.081870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.648 qpair failed and we were unable to recover it. 
00:39:29.648 [2024-11-06 15:43:57.091759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.648 [2024-11-06 15:43:57.091834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.648 [2024-11-06 15:43:57.091855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.648 [2024-11-06 15:43:57.091868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.648 [2024-11-06 15:43:57.091877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.648 [2024-11-06 15:43:57.091898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.648 qpair failed and we were unable to recover it. 
00:39:29.648 [2024-11-06 15:43:57.101879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.648 [2024-11-06 15:43:57.102003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.648 [2024-11-06 15:43:57.102025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.648 [2024-11-06 15:43:57.102036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.648 [2024-11-06 15:43:57.102045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.648 [2024-11-06 15:43:57.102067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.648 qpair failed and we were unable to recover it. 
00:39:29.648 [2024-11-06 15:43:57.111833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.648 [2024-11-06 15:43:57.111915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.648 [2024-11-06 15:43:57.111937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.648 [2024-11-06 15:43:57.111949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.648 [2024-11-06 15:43:57.111958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.648 [2024-11-06 15:43:57.111980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.648 qpair failed and we were unable to recover it. 
00:39:29.648 [2024-11-06 15:43:57.121813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.648 [2024-11-06 15:43:57.121896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.648 [2024-11-06 15:43:57.121918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.648 [2024-11-06 15:43:57.121930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.648 [2024-11-06 15:43:57.121939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.648 [2024-11-06 15:43:57.121961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.648 qpair failed and we were unable to recover it. 
00:39:29.648 [2024-11-06 15:43:57.131859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.648 [2024-11-06 15:43:57.131939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.648 [2024-11-06 15:43:57.131963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.648 [2024-11-06 15:43:57.131975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.648 [2024-11-06 15:43:57.131984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.648 [2024-11-06 15:43:57.132006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.648 qpair failed and we were unable to recover it. 
00:39:29.648 [2024-11-06 15:43:57.141959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.648 [2024-11-06 15:43:57.142035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.648 [2024-11-06 15:43:57.142056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.142068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.142077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.142099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.151970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.152047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.649 [2024-11-06 15:43:57.152070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.152082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.152091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.152113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.161926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.161999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.649 [2024-11-06 15:43:57.162021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.162032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.162041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.162063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.172025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.172121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.649 [2024-11-06 15:43:57.172142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.172154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.172167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.172189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.182081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.182184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.649 [2024-11-06 15:43:57.182213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.182226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.182234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.182257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.192054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.192126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.649 [2024-11-06 15:43:57.192147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.192159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.192174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.192196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.202110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.202220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.649 [2024-11-06 15:43:57.202241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.202253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.202262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.202284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.212090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.212181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.649 [2024-11-06 15:43:57.212207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.212219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.212228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.212250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.222127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.222208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.649 [2024-11-06 15:43:57.222230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.222242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.222251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.222273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.232160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.232241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.649 [2024-11-06 15:43:57.232263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.232275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.232283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.232305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.242227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.242302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.649 [2024-11-06 15:43:57.242324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.242335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.242344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.242366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.252158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.252256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.649 [2024-11-06 15:43:57.252278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.252289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.252298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.252322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.262268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.262344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.649 [2024-11-06 15:43:57.262368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.649 [2024-11-06 15:43:57.262380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.649 [2024-11-06 15:43:57.262389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.649 [2024-11-06 15:43:57.262411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.649 qpair failed and we were unable to recover it. 
00:39:29.649 [2024-11-06 15:43:57.272272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.649 [2024-11-06 15:43:57.272381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.650 [2024-11-06 15:43:57.272403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.650 [2024-11-06 15:43:57.272414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.650 [2024-11-06 15:43:57.272424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.650 [2024-11-06 15:43:57.272446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.650 qpair failed and we were unable to recover it. 
00:39:29.650 [2024-11-06 15:43:57.282308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.650 [2024-11-06 15:43:57.282383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.650 [2024-11-06 15:43:57.282404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.650 [2024-11-06 15:43:57.282416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.650 [2024-11-06 15:43:57.282425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.650 [2024-11-06 15:43:57.282448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.650 qpair failed and we were unable to recover it. 
00:39:29.910 [2024-11-06 15:43:57.292368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.910 [2024-11-06 15:43:57.292465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.910 [2024-11-06 15:43:57.292487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.910 [2024-11-06 15:43:57.292499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.910 [2024-11-06 15:43:57.292508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.910 [2024-11-06 15:43:57.292530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.911 [2024-11-06 15:43:57.302434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.911 [2024-11-06 15:43:57.302515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.911 [2024-11-06 15:43:57.302536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.911 [2024-11-06 15:43:57.302551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.911 [2024-11-06 15:43:57.302560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.911 [2024-11-06 15:43:57.302583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.911 [2024-11-06 15:43:57.312401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.911 [2024-11-06 15:43:57.312478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.911 [2024-11-06 15:43:57.312500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.911 [2024-11-06 15:43:57.312511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.911 [2024-11-06 15:43:57.312521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.911 [2024-11-06 15:43:57.312542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.911 [2024-11-06 15:43:57.322417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.911 [2024-11-06 15:43:57.322534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.911 [2024-11-06 15:43:57.322555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.911 [2024-11-06 15:43:57.322567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.911 [2024-11-06 15:43:57.322576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.911 [2024-11-06 15:43:57.322598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.911 [2024-11-06 15:43:57.332437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.911 [2024-11-06 15:43:57.332520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.911 [2024-11-06 15:43:57.332542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.911 [2024-11-06 15:43:57.332554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.911 [2024-11-06 15:43:57.332562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.911 [2024-11-06 15:43:57.332584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.911 [2024-11-06 15:43:57.342486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.911 [2024-11-06 15:43:57.342564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.911 [2024-11-06 15:43:57.342586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.911 [2024-11-06 15:43:57.342598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.911 [2024-11-06 15:43:57.342607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.911 [2024-11-06 15:43:57.342629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.911 [2024-11-06 15:43:57.352480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.911 [2024-11-06 15:43:57.352557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.911 [2024-11-06 15:43:57.352579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.911 [2024-11-06 15:43:57.352591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.911 [2024-11-06 15:43:57.352599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.911 [2024-11-06 15:43:57.352622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.911 [2024-11-06 15:43:57.362493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.911 [2024-11-06 15:43:57.362574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.911 [2024-11-06 15:43:57.362595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.911 [2024-11-06 15:43:57.362607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.911 [2024-11-06 15:43:57.362615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.911 [2024-11-06 15:43:57.362637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.911 [2024-11-06 15:43:57.372564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.911 [2024-11-06 15:43:57.372644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.911 [2024-11-06 15:43:57.372665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.911 [2024-11-06 15:43:57.372677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.911 [2024-11-06 15:43:57.372686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.911 [2024-11-06 15:43:57.372707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.911 [2024-11-06 15:43:57.382617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.911 [2024-11-06 15:43:57.382695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.911 [2024-11-06 15:43:57.382717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.911 [2024-11-06 15:43:57.382729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.911 [2024-11-06 15:43:57.382738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.911 [2024-11-06 15:43:57.382764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.911 [2024-11-06 15:43:57.392633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.911 [2024-11-06 15:43:57.392711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.911 [2024-11-06 15:43:57.392733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.911 [2024-11-06 15:43:57.392745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.911 [2024-11-06 15:43:57.392753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.911 [2024-11-06 15:43:57.392775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.911 [2024-11-06 15:43:57.402700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.911 [2024-11-06 15:43:57.402775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.911 [2024-11-06 15:43:57.402797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.911 [2024-11-06 15:43:57.402808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.911 [2024-11-06 15:43:57.402817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.911 [2024-11-06 15:43:57.402838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.911 [2024-11-06 15:43:57.412680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.911 [2024-11-06 15:43:57.412761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.911 [2024-11-06 15:43:57.412783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.911 [2024-11-06 15:43:57.412794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.911 [2024-11-06 15:43:57.412803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.911 [2024-11-06 15:43:57.412825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.911 qpair failed and we were unable to recover it. 
00:39:29.912 [2024-11-06 15:43:57.422769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.912 [2024-11-06 15:43:57.422845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.912 [2024-11-06 15:43:57.422866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.912 [2024-11-06 15:43:57.422878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.912 [2024-11-06 15:43:57.422886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.912 [2024-11-06 15:43:57.422909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.912 qpair failed and we were unable to recover it. 
00:39:29.912 [2024-11-06 15:43:57.432852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.912 [2024-11-06 15:43:57.432926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.912 [2024-11-06 15:43:57.432947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.912 [2024-11-06 15:43:57.432961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.912 [2024-11-06 15:43:57.432971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.912 [2024-11-06 15:43:57.432994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.912 qpair failed and we were unable to recover it. 
00:39:29.912 [2024-11-06 15:43:57.442829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.912 [2024-11-06 15:43:57.442912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.912 [2024-11-06 15:43:57.442934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.912 [2024-11-06 15:43:57.442945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.912 [2024-11-06 15:43:57.442955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.912 [2024-11-06 15:43:57.442976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.912 qpair failed and we were unable to recover it. 
00:39:29.912 [2024-11-06 15:43:57.452809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.912 [2024-11-06 15:43:57.452886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.912 [2024-11-06 15:43:57.452913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.912 [2024-11-06 15:43:57.452925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.912 [2024-11-06 15:43:57.452934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.912 [2024-11-06 15:43:57.452957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.912 qpair failed and we were unable to recover it. 
00:39:29.912 [2024-11-06 15:43:57.462790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.912 [2024-11-06 15:43:57.462868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.912 [2024-11-06 15:43:57.462890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.912 [2024-11-06 15:43:57.462902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.912 [2024-11-06 15:43:57.462910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.912 [2024-11-06 15:43:57.462932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.912 qpair failed and we were unable to recover it. 
00:39:29.912 [2024-11-06 15:43:57.472846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.912 [2024-11-06 15:43:57.472964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.912 [2024-11-06 15:43:57.472986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.912 [2024-11-06 15:43:57.472997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.912 [2024-11-06 15:43:57.473006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.912 [2024-11-06 15:43:57.473032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.912 qpair failed and we were unable to recover it. 
00:39:29.912 [2024-11-06 15:43:57.482916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.912 [2024-11-06 15:43:57.482999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.912 [2024-11-06 15:43:57.483022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.912 [2024-11-06 15:43:57.483033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.912 [2024-11-06 15:43:57.483042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.912 [2024-11-06 15:43:57.483065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.912 qpair failed and we were unable to recover it. 
00:39:29.912 [2024-11-06 15:43:57.492968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.912 [2024-11-06 15:43:57.493045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.912 [2024-11-06 15:43:57.493067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.912 [2024-11-06 15:43:57.493079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.912 [2024-11-06 15:43:57.493088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.912 [2024-11-06 15:43:57.493110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.912 qpair failed and we were unable to recover it. 
00:39:29.912 [2024-11-06 15:43:57.502913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.912 [2024-11-06 15:43:57.502985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.912 [2024-11-06 15:43:57.503007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.912 [2024-11-06 15:43:57.503020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.912 [2024-11-06 15:43:57.503028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.912 [2024-11-06 15:43:57.503050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.912 qpair failed and we were unable to recover it. 
00:39:29.912 [2024-11-06 15:43:57.513034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.912 [2024-11-06 15:43:57.513113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.912 [2024-11-06 15:43:57.513135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.912 [2024-11-06 15:43:57.513146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.912 [2024-11-06 15:43:57.513155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.912 [2024-11-06 15:43:57.513177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.912 qpair failed and we were unable to recover it. 
00:39:29.912 [2024-11-06 15:43:57.523018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.912 [2024-11-06 15:43:57.523097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.912 [2024-11-06 15:43:57.523118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.912 [2024-11-06 15:43:57.523130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.912 [2024-11-06 15:43:57.523139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.912 [2024-11-06 15:43:57.523161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.912 qpair failed and we were unable to recover it. 
00:39:29.912 [2024-11-06 15:43:57.532979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.912 [2024-11-06 15:43:57.533057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.913 [2024-11-06 15:43:57.533079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.913 [2024-11-06 15:43:57.533090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.913 [2024-11-06 15:43:57.533100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.913 [2024-11-06 15:43:57.533121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.913 qpair failed and we were unable to recover it. 
00:39:29.913 [2024-11-06 15:43:57.543055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.913 [2024-11-06 15:43:57.543131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.913 [2024-11-06 15:43:57.543152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.913 [2024-11-06 15:43:57.543164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.913 [2024-11-06 15:43:57.543174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:29.913 [2024-11-06 15:43:57.543196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:29.913 qpair failed and we were unable to recover it. 
00:39:30.171 [2024-11-06 15:43:57.553116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.171 [2024-11-06 15:43:57.553206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.171 [2024-11-06 15:43:57.553229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.171 [2024-11-06 15:43:57.553241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.171 [2024-11-06 15:43:57.553251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.171 [2024-11-06 15:43:57.553273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.171 qpair failed and we were unable to recover it. 
00:39:30.171 [2024-11-06 15:43:57.563198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.171 [2024-11-06 15:43:57.563279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.171 [2024-11-06 15:43:57.563304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.171 [2024-11-06 15:43:57.563316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.171 [2024-11-06 15:43:57.563325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.171 [2024-11-06 15:43:57.563348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.171 qpair failed and we were unable to recover it. 
00:39:30.171 [2024-11-06 15:43:57.573166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.171 [2024-11-06 15:43:57.573249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.171 [2024-11-06 15:43:57.573270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.171 [2024-11-06 15:43:57.573282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.171 [2024-11-06 15:43:57.573292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.171 [2024-11-06 15:43:57.573322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.171 qpair failed and we were unable to recover it. 
00:39:30.171 [2024-11-06 15:43:57.583226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.171 [2024-11-06 15:43:57.583301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.171 [2024-11-06 15:43:57.583323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.171 [2024-11-06 15:43:57.583335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.171 [2024-11-06 15:43:57.583345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.171 [2024-11-06 15:43:57.583368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.171 qpair failed and we were unable to recover it. 
00:39:30.171 [2024-11-06 15:43:57.593240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.171 [2024-11-06 15:43:57.593335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.171 [2024-11-06 15:43:57.593358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.171 [2024-11-06 15:43:57.593369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.171 [2024-11-06 15:43:57.593377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.171 [2024-11-06 15:43:57.593400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.171 qpair failed and we were unable to recover it. 
00:39:30.172 [2024-11-06 15:43:57.603275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.172 [2024-11-06 15:43:57.603349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.172 [2024-11-06 15:43:57.603370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.172 [2024-11-06 15:43:57.603383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.172 [2024-11-06 15:43:57.603392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.172 [2024-11-06 15:43:57.603416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.172 qpair failed and we were unable to recover it. 
00:39:30.172 [2024-11-06 15:43:57.613215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.172 [2024-11-06 15:43:57.613291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.172 [2024-11-06 15:43:57.613313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.172 [2024-11-06 15:43:57.613324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.172 [2024-11-06 15:43:57.613334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.172 [2024-11-06 15:43:57.613356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.172 qpair failed and we were unable to recover it. 
00:39:30.172 [2024-11-06 15:43:57.623382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.172 [2024-11-06 15:43:57.623491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.172 [2024-11-06 15:43:57.623514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.172 [2024-11-06 15:43:57.623527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.172 [2024-11-06 15:43:57.623537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.172 [2024-11-06 15:43:57.623559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.172 qpair failed and we were unable to recover it. 
00:39:30.172 [2024-11-06 15:43:57.633322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.633401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.633423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.633434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.633444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.633465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.643378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.643450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.643473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.643485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.643495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.643517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.653431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.653511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.653533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.653546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.653556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.653578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.663513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.663598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.663620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.663633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.663644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.663667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.673447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.673522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.673544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.673557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.673568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.673592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.683503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.683578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.683601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.683615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.683626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.683650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.693617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.693691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.693716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.693729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.693739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.693762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.703545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.703619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.703643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.703656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.703667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.703699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.713694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.713774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.713796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.713809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.713820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.713846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.723585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.723662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.723685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.723698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.723709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.723732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.733725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.733805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.733829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.733842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.733856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.733880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.743705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.743777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.743801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.743815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.743825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.743849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.753777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.753870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.753893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.753906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.753917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.753939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.763724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.763823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.763845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.763858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.763868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.763891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.773834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.773910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.773932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.773946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.773956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.773980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.783795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.783869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.783892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.783905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.783915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.783938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.793839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.793912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.793934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.793947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.793957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.793979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.172 [2024-11-06 15:43:57.803853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.172 [2024-11-06 15:43:57.803932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.172 [2024-11-06 15:43:57.803955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.172 [2024-11-06 15:43:57.803967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.172 [2024-11-06 15:43:57.803977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.172 [2024-11-06 15:43:57.804001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.172 qpair failed and we were unable to recover it.
00:39:30.432 [2024-11-06 15:43:57.813996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.432 [2024-11-06 15:43:57.814073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.432 [2024-11-06 15:43:57.814096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.432 [2024-11-06 15:43:57.814109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.432 [2024-11-06 15:43:57.814120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.432 [2024-11-06 15:43:57.814145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.432 qpair failed and we were unable to recover it.
00:39:30.432 [2024-11-06 15:43:57.823909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.432 [2024-11-06 15:43:57.823997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.432 [2024-11-06 15:43:57.824024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.432 [2024-11-06 15:43:57.824039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.432 [2024-11-06 15:43:57.824051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.432 [2024-11-06 15:43:57.824076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.432 qpair failed and we were unable to recover it.
00:39:30.432 [2024-11-06 15:43:57.833850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.432 [2024-11-06 15:43:57.833963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.432 [2024-11-06 15:43:57.833986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.432 [2024-11-06 15:43:57.833999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.432 [2024-11-06 15:43:57.834009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.432 [2024-11-06 15:43:57.834033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.432 qpair failed and we were unable to recover it.
00:39:30.432 [2024-11-06 15:43:57.843986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.432 [2024-11-06 15:43:57.844061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.432 [2024-11-06 15:43:57.844084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.432 [2024-11-06 15:43:57.844096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.432 [2024-11-06 15:43:57.844106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.432 [2024-11-06 15:43:57.844130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.432 qpair failed and we were unable to recover it.
00:39:30.432 [2024-11-06 15:43:57.854036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.432 [2024-11-06 15:43:57.854135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.432 [2024-11-06 15:43:57.854158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.432 [2024-11-06 15:43:57.854171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.432 [2024-11-06 15:43:57.854181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.432 [2024-11-06 15:43:57.854209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.432 qpair failed and we were unable to recover it.
00:39:30.432 [2024-11-06 15:43:57.863995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.432 [2024-11-06 15:43:57.864107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.432 [2024-11-06 15:43:57.864130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.432 [2024-11-06 15:43:57.864146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.432 [2024-11-06 15:43:57.864156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.432 [2024-11-06 15:43:57.864179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.432 qpair failed and we were unable to recover it.
00:39:30.432 [2024-11-06 15:43:57.874064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.432 [2024-11-06 15:43:57.874145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.432 [2024-11-06 15:43:57.874167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.432 [2024-11-06 15:43:57.874180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.432 [2024-11-06 15:43:57.874190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.432 [2024-11-06 15:43:57.874222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.432 qpair failed and we were unable to recover it.
00:39:30.432 [2024-11-06 15:43:57.883952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.432 [2024-11-06 15:43:57.884027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.432 [2024-11-06 15:43:57.884050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.432 [2024-11-06 15:43:57.884063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.432 [2024-11-06 15:43:57.884073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.432 [2024-11-06 15:43:57.884097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.432 qpair failed and we were unable to recover it.
00:39:30.432 [2024-11-06 15:43:57.894079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.432 [2024-11-06 15:43:57.894159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.432 [2024-11-06 15:43:57.894183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.432 [2024-11-06 15:43:57.894196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.432 [2024-11-06 15:43:57.894215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.432 [2024-11-06 15:43:57.894238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.432 qpair failed and we were unable to recover it.
00:39:30.432 [2024-11-06 15:43:57.904104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.432 [2024-11-06 15:43:57.904186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.432 [2024-11-06 15:43:57.904215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.432 [2024-11-06 15:43:57.904228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.432 [2024-11-06 15:43:57.904239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.433 [2024-11-06 15:43:57.904263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.433 qpair failed and we were unable to recover it.
00:39:30.433 [2024-11-06 15:43:57.914099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.433 [2024-11-06 15:43:57.914175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.433 [2024-11-06 15:43:57.914198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.433 [2024-11-06 15:43:57.914217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.433 [2024-11-06 15:43:57.914229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.433 [2024-11-06 15:43:57.914252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.433 qpair failed and we were unable to recover it.
00:39:30.433 [2024-11-06 15:43:57.924189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.433 [2024-11-06 15:43:57.924273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.433 [2024-11-06 15:43:57.924298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.433 [2024-11-06 15:43:57.924313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.433 [2024-11-06 15:43:57.924324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.433 [2024-11-06 15:43:57.924348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.433 qpair failed and we were unable to recover it.
00:39:30.433 [2024-11-06 15:43:57.934247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.433 [2024-11-06 15:43:57.934321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.433 [2024-11-06 15:43:57.934345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.433 [2024-11-06 15:43:57.934359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.433 [2024-11-06 15:43:57.934370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.433 [2024-11-06 15:43:57.934394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.433 qpair failed and we were unable to recover it.
00:39:30.433 [2024-11-06 15:43:57.944217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.433 [2024-11-06 15:43:57.944289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.433 [2024-11-06 15:43:57.944312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.433 [2024-11-06 15:43:57.944325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.433 [2024-11-06 15:43:57.944336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.433 [2024-11-06 15:43:57.944360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.433 qpair failed and we were unable to recover it.
00:39:30.433 [2024-11-06 15:43:57.954299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.433 [2024-11-06 15:43:57.954402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.433 [2024-11-06 15:43:57.954424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.433 [2024-11-06 15:43:57.954435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.433 [2024-11-06 15:43:57.954444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.433 [2024-11-06 15:43:57.954466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.433 qpair failed and we were unable to recover it.
00:39:30.433 [2024-11-06 15:43:57.964299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.433 [2024-11-06 15:43:57.964402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.433 [2024-11-06 15:43:57.964424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.433 [2024-11-06 15:43:57.964444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.433 [2024-11-06 15:43:57.964453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.433 [2024-11-06 15:43:57.964475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.433 qpair failed and we were unable to recover it.
00:39:30.433 [2024-11-06 15:43:57.974427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.433 [2024-11-06 15:43:57.974504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.433 [2024-11-06 15:43:57.974525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.433 [2024-11-06 15:43:57.974536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.433 [2024-11-06 15:43:57.974545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.433 [2024-11-06 15:43:57.974567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.433 qpair failed and we were unable to recover it.
00:39:30.433 [2024-11-06 15:43:57.984360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.433 [2024-11-06 15:43:57.984457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.433 [2024-11-06 15:43:57.984480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.433 [2024-11-06 15:43:57.984491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.433 [2024-11-06 15:43:57.984501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.433 [2024-11-06 15:43:57.984523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.433 qpair failed and we were unable to recover it. 
00:39:30.433 [2024-11-06 15:43:57.994339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.433 [2024-11-06 15:43:57.994412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.433 [2024-11-06 15:43:57.994433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.433 [2024-11-06 15:43:57.994447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.433 [2024-11-06 15:43:57.994457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.433 [2024-11-06 15:43:57.994479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.433 qpair failed and we were unable to recover it. 
00:39:30.433 [2024-11-06 15:43:58.004358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.433 [2024-11-06 15:43:58.004449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.433 [2024-11-06 15:43:58.004472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.433 [2024-11-06 15:43:58.004484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.433 [2024-11-06 15:43:58.004493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.433 [2024-11-06 15:43:58.004515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.433 qpair failed and we were unable to recover it. 
00:39:30.433 [2024-11-06 15:43:58.014500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.433 [2024-11-06 15:43:58.014583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.433 [2024-11-06 15:43:58.014605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.433 [2024-11-06 15:43:58.014617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.433 [2024-11-06 15:43:58.014626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.433 [2024-11-06 15:43:58.014647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.433 qpair failed and we were unable to recover it. 
00:39:30.433 [2024-11-06 15:43:58.024435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.433 [2024-11-06 15:43:58.024512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.433 [2024-11-06 15:43:58.024533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.433 [2024-11-06 15:43:58.024545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.433 [2024-11-06 15:43:58.024554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.433 [2024-11-06 15:43:58.024576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.433 qpair failed and we were unable to recover it. 
00:39:30.434 [2024-11-06 15:43:58.034468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.434 [2024-11-06 15:43:58.034543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.434 [2024-11-06 15:43:58.034565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.434 [2024-11-06 15:43:58.034577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.434 [2024-11-06 15:43:58.034586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.434 [2024-11-06 15:43:58.034611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.434 qpair failed and we were unable to recover it. 
00:39:30.434 [2024-11-06 15:43:58.044467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.434 [2024-11-06 15:43:58.044541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.434 [2024-11-06 15:43:58.044564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.434 [2024-11-06 15:43:58.044575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.434 [2024-11-06 15:43:58.044584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.434 [2024-11-06 15:43:58.044609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.434 qpair failed and we were unable to recover it. 
00:39:30.434 [2024-11-06 15:43:58.054662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.434 [2024-11-06 15:43:58.054738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.434 [2024-11-06 15:43:58.054760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.434 [2024-11-06 15:43:58.054772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.434 [2024-11-06 15:43:58.054781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.434 [2024-11-06 15:43:58.054802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.434 qpair failed and we were unable to recover it. 
00:39:30.434 [2024-11-06 15:43:58.064523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.434 [2024-11-06 15:43:58.064600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.434 [2024-11-06 15:43:58.064622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.434 [2024-11-06 15:43:58.064633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.434 [2024-11-06 15:43:58.064642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.434 [2024-11-06 15:43:58.064665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.434 qpair failed and we were unable to recover it. 
00:39:30.694 [2024-11-06 15:43:58.074583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.694 [2024-11-06 15:43:58.074683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.694 [2024-11-06 15:43:58.074705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.694 [2024-11-06 15:43:58.074717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.694 [2024-11-06 15:43:58.074726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.694 [2024-11-06 15:43:58.074748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.694 qpair failed and we were unable to recover it. 
00:39:30.694 [2024-11-06 15:43:58.084679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.694 [2024-11-06 15:43:58.084783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.694 [2024-11-06 15:43:58.084804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.694 [2024-11-06 15:43:58.084816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.694 [2024-11-06 15:43:58.084825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.694 [2024-11-06 15:43:58.084847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.694 qpair failed and we were unable to recover it. 
00:39:30.694 [2024-11-06 15:43:58.094632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.694 [2024-11-06 15:43:58.094713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.694 [2024-11-06 15:43:58.094736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.694 [2024-11-06 15:43:58.094748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.694 [2024-11-06 15:43:58.094757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.694 [2024-11-06 15:43:58.094780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.694 qpair failed and we were unable to recover it. 
00:39:30.694 [2024-11-06 15:43:58.104675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.694 [2024-11-06 15:43:58.104767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.694 [2024-11-06 15:43:58.104789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.694 [2024-11-06 15:43:58.104800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.694 [2024-11-06 15:43:58.104809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.694 [2024-11-06 15:43:58.104832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.694 qpair failed and we were unable to recover it. 
00:39:30.694 [2024-11-06 15:43:58.114716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.694 [2024-11-06 15:43:58.114797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.694 [2024-11-06 15:43:58.114819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.694 [2024-11-06 15:43:58.114830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.694 [2024-11-06 15:43:58.114839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.694 [2024-11-06 15:43:58.114861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.694 qpair failed and we were unable to recover it. 
00:39:30.694 [2024-11-06 15:43:58.124785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.694 [2024-11-06 15:43:58.124870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.694 [2024-11-06 15:43:58.124896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.694 [2024-11-06 15:43:58.124907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.694 [2024-11-06 15:43:58.124916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.694 [2024-11-06 15:43:58.124938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.694 qpair failed and we were unable to recover it. 
00:39:30.694 [2024-11-06 15:43:58.134861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.694 [2024-11-06 15:43:58.134953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.694 [2024-11-06 15:43:58.134975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.694 [2024-11-06 15:43:58.134987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.694 [2024-11-06 15:43:58.134995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.694 [2024-11-06 15:43:58.135017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.694 qpair failed and we were unable to recover it. 
00:39:30.694 [2024-11-06 15:43:58.144837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.694 [2024-11-06 15:43:58.144915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.694 [2024-11-06 15:43:58.144937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.694 [2024-11-06 15:43:58.144948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.694 [2024-11-06 15:43:58.144958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.694 [2024-11-06 15:43:58.144979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.694 qpair failed and we were unable to recover it. 
00:39:30.694 [2024-11-06 15:43:58.154747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.694 [2024-11-06 15:43:58.154825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.694 [2024-11-06 15:43:58.154847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.694 [2024-11-06 15:43:58.154860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.694 [2024-11-06 15:43:58.154868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.694 [2024-11-06 15:43:58.154890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.694 qpair failed and we were unable to recover it. 
00:39:30.694 [2024-11-06 15:43:58.164922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.694 [2024-11-06 15:43:58.165010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.694 [2024-11-06 15:43:58.165031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.694 [2024-11-06 15:43:58.165043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.694 [2024-11-06 15:43:58.165055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.695 [2024-11-06 15:43:58.165077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.695 qpair failed and we were unable to recover it. 
00:39:30.695 [2024-11-06 15:43:58.174909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.695 [2024-11-06 15:43:58.174995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.695 [2024-11-06 15:43:58.175017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.695 [2024-11-06 15:43:58.175029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.695 [2024-11-06 15:43:58.175038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.695 [2024-11-06 15:43:58.175060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.695 qpair failed and we were unable to recover it. 
00:39:30.695 [2024-11-06 15:43:58.184909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.695 [2024-11-06 15:43:58.184986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.695 [2024-11-06 15:43:58.185008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.695 [2024-11-06 15:43:58.185019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.695 [2024-11-06 15:43:58.185028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.695 [2024-11-06 15:43:58.185050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.695 qpair failed and we were unable to recover it. 
00:39:30.695 [2024-11-06 15:43:58.194893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.695 [2024-11-06 15:43:58.195006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.695 [2024-11-06 15:43:58.195028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.695 [2024-11-06 15:43:58.195039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.695 [2024-11-06 15:43:58.195049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.695 [2024-11-06 15:43:58.195071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.695 qpair failed and we were unable to recover it. 
00:39:30.695 [2024-11-06 15:43:58.205026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.695 [2024-11-06 15:43:58.205118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.695 [2024-11-06 15:43:58.205140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.695 [2024-11-06 15:43:58.205152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.695 [2024-11-06 15:43:58.205161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.695 [2024-11-06 15:43:58.205183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.695 qpair failed and we were unable to recover it. 
00:39:30.695 [2024-11-06 15:43:58.215021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.695 [2024-11-06 15:43:58.215139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.695 [2024-11-06 15:43:58.215161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.695 [2024-11-06 15:43:58.215172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.695 [2024-11-06 15:43:58.215181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.695 [2024-11-06 15:43:58.215213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.695 qpair failed and we were unable to recover it. 
00:39:30.695 [2024-11-06 15:43:58.225010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.695 [2024-11-06 15:43:58.225103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.695 [2024-11-06 15:43:58.225124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.695 [2024-11-06 15:43:58.225135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.695 [2024-11-06 15:43:58.225144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.695 [2024-11-06 15:43:58.225166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.695 qpair failed and we were unable to recover it. 
00:39:30.695 [2024-11-06 15:43:58.235054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.695 [2024-11-06 15:43:58.235130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.695 [2024-11-06 15:43:58.235151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.695 [2024-11-06 15:43:58.235163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.695 [2024-11-06 15:43:58.235172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.695 [2024-11-06 15:43:58.235193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.695 qpair failed and we were unable to recover it. 
00:39:30.695 [2024-11-06 15:43:58.245138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.695 [2024-11-06 15:43:58.245226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.695 [2024-11-06 15:43:58.245247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.695 [2024-11-06 15:43:58.245259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.695 [2024-11-06 15:43:58.245269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:30.695 [2024-11-06 15:43:58.245293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:30.695 qpair failed and we were unable to recover it. 
00:39:30.695 [2024-11-06 15:43:58.255230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.695 [2024-11-06 15:43:58.255318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.695 [2024-11-06 15:43:58.255343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.695 [2024-11-06 15:43:58.255356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.695 [2024-11-06 15:43:58.255365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.695 [2024-11-06 15:43:58.255389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.695 qpair failed and we were unable to recover it.
00:39:30.695 [2024-11-06 15:43:58.265226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.695 [2024-11-06 15:43:58.265310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.695 [2024-11-06 15:43:58.265331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.695 [2024-11-06 15:43:58.265343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.695 [2024-11-06 15:43:58.265352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.695 [2024-11-06 15:43:58.265374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.695 qpair failed and we were unable to recover it.
00:39:30.695 [2024-11-06 15:43:58.275196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.695 [2024-11-06 15:43:58.275275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.695 [2024-11-06 15:43:58.275296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.695 [2024-11-06 15:43:58.275308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.695 [2024-11-06 15:43:58.275317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.695 [2024-11-06 15:43:58.275339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.695 qpair failed and we were unable to recover it.
00:39:30.695 [2024-11-06 15:43:58.285382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.695 [2024-11-06 15:43:58.285483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.695 [2024-11-06 15:43:58.285508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.695 [2024-11-06 15:43:58.285520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.695 [2024-11-06 15:43:58.285529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.695 [2024-11-06 15:43:58.285552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.695 qpair failed and we were unable to recover it.
00:39:30.695 [2024-11-06 15:43:58.295343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.695 [2024-11-06 15:43:58.295422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.695 [2024-11-06 15:43:58.295443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.695 [2024-11-06 15:43:58.295455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.695 [2024-11-06 15:43:58.295468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.695 [2024-11-06 15:43:58.295490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.695 qpair failed and we were unable to recover it.
00:39:30.696 [2024-11-06 15:43:58.305441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.696 [2024-11-06 15:43:58.305565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.696 [2024-11-06 15:43:58.305587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.696 [2024-11-06 15:43:58.305598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.696 [2024-11-06 15:43:58.305607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.696 [2024-11-06 15:43:58.305628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.696 qpair failed and we were unable to recover it.
00:39:30.696 [2024-11-06 15:43:58.315264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.696 [2024-11-06 15:43:58.315345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.696 [2024-11-06 15:43:58.315367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.696 [2024-11-06 15:43:58.315379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.696 [2024-11-06 15:43:58.315387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.696 [2024-11-06 15:43:58.315409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.696 qpair failed and we were unable to recover it.
00:39:30.696 [2024-11-06 15:43:58.325361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.696 [2024-11-06 15:43:58.325431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.696 [2024-11-06 15:43:58.325452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.696 [2024-11-06 15:43:58.325464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.696 [2024-11-06 15:43:58.325473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.696 [2024-11-06 15:43:58.325495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.696 qpair failed and we were unable to recover it.
00:39:30.953 [2024-11-06 15:43:58.335378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.953 [2024-11-06 15:43:58.335463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.953 [2024-11-06 15:43:58.335485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.953 [2024-11-06 15:43:58.335496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.953 [2024-11-06 15:43:58.335506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.335528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.345352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.345437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.345459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.345471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.345480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.345501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.355500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.355594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.355616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.355627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.355636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.355658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.365506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.365611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.365633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.365644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.365654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.365677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.375608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.375697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.375718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.375730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.375739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.375765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.385482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.385558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.385583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.385594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.385603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.385625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.395508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.395588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.395610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.395622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.395631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.395652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.405623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.405726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.405747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.405759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.405768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.405789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.415658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.415740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.415762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.415773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.415782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.415804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.425615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.425687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.425708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.425723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.425732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.425753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.435710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.435806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.435827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.435839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.435847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.435869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.445795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.445873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.445895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.445906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.445916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.445937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.455802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.455879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.455899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.455910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.455919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.954 [2024-11-06 15:43:58.455941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.954 qpair failed and we were unable to recover it.
00:39:30.954 [2024-11-06 15:43:58.465799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.954 [2024-11-06 15:43:58.465903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.954 [2024-11-06 15:43:58.465925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.954 [2024-11-06 15:43:58.465936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.954 [2024-11-06 15:43:58.465945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.465968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:30.955 [2024-11-06 15:43:58.475811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.955 [2024-11-06 15:43:58.475923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.955 [2024-11-06 15:43:58.475945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.955 [2024-11-06 15:43:58.475957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.955 [2024-11-06 15:43:58.475971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.475993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:30.955 [2024-11-06 15:43:58.485774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.955 [2024-11-06 15:43:58.485853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.955 [2024-11-06 15:43:58.485874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.955 [2024-11-06 15:43:58.485886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.955 [2024-11-06 15:43:58.485895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.485917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:30.955 [2024-11-06 15:43:58.495849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.955 [2024-11-06 15:43:58.495949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.955 [2024-11-06 15:43:58.495971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.955 [2024-11-06 15:43:58.495983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.955 [2024-11-06 15:43:58.495991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.496013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:30.955 [2024-11-06 15:43:58.505862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.955 [2024-11-06 15:43:58.505937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.955 [2024-11-06 15:43:58.505960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.955 [2024-11-06 15:43:58.505972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.955 [2024-11-06 15:43:58.505981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.506004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:30.955 [2024-11-06 15:43:58.515947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.955 [2024-11-06 15:43:58.516035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.955 [2024-11-06 15:43:58.516057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.955 [2024-11-06 15:43:58.516069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.955 [2024-11-06 15:43:58.516078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.516099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:30.955 [2024-11-06 15:43:58.525907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.955 [2024-11-06 15:43:58.525992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.955 [2024-11-06 15:43:58.526013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.955 [2024-11-06 15:43:58.526026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.955 [2024-11-06 15:43:58.526034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.526057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:30.955 [2024-11-06 15:43:58.535939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.955 [2024-11-06 15:43:58.536020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.955 [2024-11-06 15:43:58.536041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.955 [2024-11-06 15:43:58.536052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.955 [2024-11-06 15:43:58.536061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.536083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:30.955 [2024-11-06 15:43:58.545877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.955 [2024-11-06 15:43:58.545956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.955 [2024-11-06 15:43:58.545979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.955 [2024-11-06 15:43:58.545990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.955 [2024-11-06 15:43:58.545999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.546020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:30.955 [2024-11-06 15:43:58.555980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.955 [2024-11-06 15:43:58.556064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.955 [2024-11-06 15:43:58.556085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.955 [2024-11-06 15:43:58.556100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.955 [2024-11-06 15:43:58.556109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.556130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:30.955 [2024-11-06 15:43:58.566051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.955 [2024-11-06 15:43:58.566125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.955 [2024-11-06 15:43:58.566147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.955 [2024-11-06 15:43:58.566158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.955 [2024-11-06 15:43:58.566168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.566189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:30.955 [2024-11-06 15:43:58.576059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.955 [2024-11-06 15:43:58.576144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.955 [2024-11-06 15:43:58.576167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.955 [2024-11-06 15:43:58.576178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.955 [2024-11-06 15:43:58.576188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.576215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:30.955 [2024-11-06 15:43:58.586105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:30.955 [2024-11-06 15:43:58.586184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:30.955 [2024-11-06 15:43:58.586215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:30.955 [2024-11-06 15:43:58.586227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:30.955 [2024-11-06 15:43:58.586237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:30.955 [2024-11-06 15:43:58.586259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:30.955 qpair failed and we were unable to recover it.
00:39:31.214 [2024-11-06 15:43:58.596205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.214 [2024-11-06 15:43:58.596320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.214 [2024-11-06 15:43:58.596342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.214 [2024-11-06 15:43:58.596354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.214 [2024-11-06 15:43:58.596364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:31.214 [2024-11-06 15:43:58.596389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:31.214 qpair failed and we were unable to recover it.
00:39:31.214 [2024-11-06 15:43:58.606131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.606207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.606229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.606241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.606250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.606272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.616217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.616293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.616314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.616326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.616336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.616358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.626155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.626230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.626253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.626266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.626276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.626299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.636266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.636342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.636364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.636376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.636385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.636408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.646304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.646405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.646428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.646440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.646449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.646471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.656358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.656436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.656458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.656469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.656478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.656501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.666278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.666354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.666375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.666387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.666396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.666418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.676270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.676343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.676365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.676377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.676386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.676408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.686430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.686502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.686526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.686538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.686547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.686568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.696473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.696553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.696574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.696585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.696594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.696616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.706445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.706534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.706555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.706567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.706576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.706601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.716494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.716570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.716592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.716604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.716613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.716634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.726542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.726618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.726640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.726651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.726663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.726686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.736568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.736666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.736693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.736705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.736714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.736737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.746471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.746544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.746566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.746578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.746587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.746608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.756644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.756719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.756741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.756752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.756761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.756782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.766538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.766609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.766631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.766643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.766652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.766674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.776598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.776674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.776697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.776708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.776717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.776739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.786678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.786764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.786786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.786798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.786806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.786828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.796739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.796816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.796838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.796849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.796858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.796880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.806626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.806694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.806716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.806728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.806738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.806759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.816832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.816920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.816945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.816956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.816965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.816986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.826759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.826835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.826857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.826870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.826878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.826901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.836825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.836913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.836935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.836947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.836955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.836977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.215 [2024-11-06 15:43:58.846777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.215 [2024-11-06 15:43:58.846872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.215 [2024-11-06 15:43:58.846893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.215 [2024-11-06 15:43:58.846905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.215 [2024-11-06 15:43:58.846913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.215 [2024-11-06 15:43:58.846935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.215 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.856989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.857071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.857092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.857104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.857117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.857138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.866947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.867024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.867046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.867057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.867066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.867088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.876981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.877081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.877103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.877115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.877123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.877145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.886949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.887027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.887049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.887060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.887069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.887090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.896995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.897073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.897094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.897106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.897115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.897137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.907004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.907115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.907136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.907148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.907157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.907178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.916990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.917097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.917119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.917130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.917139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.917160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.927013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.927086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.927108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.927120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.927128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.927150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.937056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.937131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.937152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.937164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.937173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.937194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.947136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.947221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.947246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.947257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.947266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.947287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.957162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.957243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.957265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.957277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.957286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.957307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.967159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.967237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.967259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.967270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.967279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.967301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.977304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.977384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.977406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.977418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.977427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.977449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.987314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.987419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.987441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.987462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.987472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.987499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:58.997441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:58.997537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:58.997561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:58.997572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:58.997582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:58.997606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:59.007288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:59.007362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.475 [2024-11-06 15:43:59.007384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.475 [2024-11-06 15:43:59.007396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.475 [2024-11-06 15:43:59.007405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.475 [2024-11-06 15:43:59.007428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.475 qpair failed and we were unable to recover it. 
00:39:31.475 [2024-11-06 15:43:59.017347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.475 [2024-11-06 15:43:59.017442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.476 [2024-11-06 15:43:59.017464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.476 [2024-11-06 15:43:59.017475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.476 [2024-11-06 15:43:59.017484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.476 [2024-11-06 15:43:59.017506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.476 qpair failed and we were unable to recover it. 
00:39:31.476 [2024-11-06 15:43:59.027340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.476 [2024-11-06 15:43:59.027421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.476 [2024-11-06 15:43:59.027443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.476 [2024-11-06 15:43:59.027455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.476 [2024-11-06 15:43:59.027463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.476 [2024-11-06 15:43:59.027485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.476 qpair failed and we were unable to recover it. 
00:39:31.476 [2024-11-06 15:43:59.037437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.476 [2024-11-06 15:43:59.037554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.476 [2024-11-06 15:43:59.037575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.476 [2024-11-06 15:43:59.037587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.476 [2024-11-06 15:43:59.037595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.476 [2024-11-06 15:43:59.037620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.476 qpair failed and we were unable to recover it. 
00:39:31.476 [2024-11-06 15:43:59.047425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.476 [2024-11-06 15:43:59.047544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.476 [2024-11-06 15:43:59.047566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.476 [2024-11-06 15:43:59.047577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.476 [2024-11-06 15:43:59.047586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.476 [2024-11-06 15:43:59.047607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.476 qpair failed and we were unable to recover it. 
00:39:31.476 [2024-11-06 15:43:59.057519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.476 [2024-11-06 15:43:59.057594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.476 [2024-11-06 15:43:59.057615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.476 [2024-11-06 15:43:59.057627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.476 [2024-11-06 15:43:59.057636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.476 [2024-11-06 15:43:59.057658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.476 qpair failed and we were unable to recover it. 
00:39:31.476 [2024-11-06 15:43:59.067489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.476 [2024-11-06 15:43:59.067568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.476 [2024-11-06 15:43:59.067589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.476 [2024-11-06 15:43:59.067601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.476 [2024-11-06 15:43:59.067610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.476 [2024-11-06 15:43:59.067633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.476 qpair failed and we were unable to recover it. 
00:39:31.476 [2024-11-06 15:43:59.077486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.476 [2024-11-06 15:43:59.077598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.476 [2024-11-06 15:43:59.077620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.476 [2024-11-06 15:43:59.077631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.476 [2024-11-06 15:43:59.077640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.476 [2024-11-06 15:43:59.077662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.476 qpair failed and we were unable to recover it. 
00:39:31.476 [2024-11-06 15:43:59.087470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.476 [2024-11-06 15:43:59.087569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.476 [2024-11-06 15:43:59.087590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.476 [2024-11-06 15:43:59.087601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.476 [2024-11-06 15:43:59.087610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.476 [2024-11-06 15:43:59.087632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.476 qpair failed and we were unable to recover it. 
00:39:31.476 [2024-11-06 15:43:59.097582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.476 [2024-11-06 15:43:59.097677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.476 [2024-11-06 15:43:59.097699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.476 [2024-11-06 15:43:59.097710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.476 [2024-11-06 15:43:59.097719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.476 [2024-11-06 15:43:59.097741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.476 qpair failed and we were unable to recover it. 
00:39:31.476 [2024-11-06 15:43:59.107624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.476 [2024-11-06 15:43:59.107704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.476 [2024-11-06 15:43:59.107725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.476 [2024-11-06 15:43:59.107737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.476 [2024-11-06 15:43:59.107745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.476 [2024-11-06 15:43:59.107767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.476 qpair failed and we were unable to recover it. 
00:39:31.734 [2024-11-06 15:43:59.117655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.734 [2024-11-06 15:43:59.117770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.734 [2024-11-06 15:43:59.117791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.734 [2024-11-06 15:43:59.117806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.734 [2024-11-06 15:43:59.117815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.734 [2024-11-06 15:43:59.117837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.734 qpair failed and we were unable to recover it. 
00:39:31.734 [2024-11-06 15:43:59.127700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.734 [2024-11-06 15:43:59.127805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.734 [2024-11-06 15:43:59.127827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.734 [2024-11-06 15:43:59.127838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.734 [2024-11-06 15:43:59.127847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.734 [2024-11-06 15:43:59.127869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.734 qpair failed and we were unable to recover it. 
00:39:31.734 [2024-11-06 15:43:59.137669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.734 [2024-11-06 15:43:59.137746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.734 [2024-11-06 15:43:59.137768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.734 [2024-11-06 15:43:59.137780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.734 [2024-11-06 15:43:59.137789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.734 [2024-11-06 15:43:59.137811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.734 qpair failed and we were unable to recover it. 
00:39:31.734 [2024-11-06 15:43:59.147638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.734 [2024-11-06 15:43:59.147715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.734 [2024-11-06 15:43:59.147737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.734 [2024-11-06 15:43:59.147748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.734 [2024-11-06 15:43:59.147757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.734 [2024-11-06 15:43:59.147779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.734 qpair failed and we were unable to recover it. 
00:39:31.734 [2024-11-06 15:43:59.157701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.734 [2024-11-06 15:43:59.157777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.734 [2024-11-06 15:43:59.157798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.734 [2024-11-06 15:43:59.157810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.734 [2024-11-06 15:43:59.157819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.735 [2024-11-06 15:43:59.157843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.735 qpair failed and we were unable to recover it. 
00:39:31.735 [2024-11-06 15:43:59.167725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.735 [2024-11-06 15:43:59.167813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.735 [2024-11-06 15:43:59.167835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.735 [2024-11-06 15:43:59.167847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.735 [2024-11-06 15:43:59.167856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.735 [2024-11-06 15:43:59.167878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.735 qpair failed and we were unable to recover it. 
00:39:31.735 [2024-11-06 15:43:59.177844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.735 [2024-11-06 15:43:59.177963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.735 [2024-11-06 15:43:59.177984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.735 [2024-11-06 15:43:59.177996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.735 [2024-11-06 15:43:59.178004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.735 [2024-11-06 15:43:59.178026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.735 qpair failed and we were unable to recover it. 
00:39:31.735 [2024-11-06 15:43:59.187833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.735 [2024-11-06 15:43:59.187909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.735 [2024-11-06 15:43:59.187931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.735 [2024-11-06 15:43:59.187943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.735 [2024-11-06 15:43:59.187952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.735 [2024-11-06 15:43:59.187974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.735 qpair failed and we were unable to recover it. 
00:39:31.997 [2024-11-06 15:43:59.538777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.997 [2024-11-06 15:43:59.538858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.997 [2024-11-06 15:43:59.538879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.997 [2024-11-06 15:43:59.538891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.997 [2024-11-06 15:43:59.538900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.997 [2024-11-06 15:43:59.538922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.997 qpair failed and we were unable to recover it. 
00:39:31.997 [2024-11-06 15:43:59.548870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.997 [2024-11-06 15:43:59.548965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.997 [2024-11-06 15:43:59.548987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.997 [2024-11-06 15:43:59.549002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.997 [2024-11-06 15:43:59.549011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.997 [2024-11-06 15:43:59.549033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.997 qpair failed and we were unable to recover it. 
00:39:31.997 [2024-11-06 15:43:59.558796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.997 [2024-11-06 15:43:59.558879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.997 [2024-11-06 15:43:59.558901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.997 [2024-11-06 15:43:59.558913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.997 [2024-11-06 15:43:59.558922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.997 [2024-11-06 15:43:59.558944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.997 qpair failed and we were unable to recover it. 
00:39:31.998 [2024-11-06 15:43:59.568839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.998 [2024-11-06 15:43:59.568920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.998 [2024-11-06 15:43:59.568942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.998 [2024-11-06 15:43:59.568955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.998 [2024-11-06 15:43:59.568964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.998 [2024-11-06 15:43:59.568986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.998 qpair failed and we were unable to recover it. 
00:39:31.998 [2024-11-06 15:43:59.578880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.998 [2024-11-06 15:43:59.578965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.998 [2024-11-06 15:43:59.578987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.998 [2024-11-06 15:43:59.578999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.998 [2024-11-06 15:43:59.579008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.998 [2024-11-06 15:43:59.579030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.998 qpair failed and we were unable to recover it. 
00:39:31.998 [2024-11-06 15:43:59.588859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.998 [2024-11-06 15:43:59.588942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.998 [2024-11-06 15:43:59.588964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.998 [2024-11-06 15:43:59.588975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.998 [2024-11-06 15:43:59.588984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.998 [2024-11-06 15:43:59.589009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.998 qpair failed and we were unable to recover it. 
00:39:31.998 [2024-11-06 15:43:59.598986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.998 [2024-11-06 15:43:59.599064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.998 [2024-11-06 15:43:59.599085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.998 [2024-11-06 15:43:59.599097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.998 [2024-11-06 15:43:59.599105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.998 [2024-11-06 15:43:59.599127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.998 qpair failed and we were unable to recover it. 
00:39:31.998 [2024-11-06 15:43:59.609084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.998 [2024-11-06 15:43:59.609175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.998 [2024-11-06 15:43:59.609197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.998 [2024-11-06 15:43:59.609215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.998 [2024-11-06 15:43:59.609224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.998 [2024-11-06 15:43:59.609247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.998 qpair failed and we were unable to recover it. 
00:39:31.998 [2024-11-06 15:43:59.619060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.998 [2024-11-06 15:43:59.619135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.998 [2024-11-06 15:43:59.619157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.998 [2024-11-06 15:43:59.619168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.998 [2024-11-06 15:43:59.619177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.998 [2024-11-06 15:43:59.619199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.998 qpair failed and we were unable to recover it. 
00:39:31.998 [2024-11-06 15:43:59.629104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.998 [2024-11-06 15:43:59.629232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.998 [2024-11-06 15:43:59.629257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.998 [2024-11-06 15:43:59.629269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.998 [2024-11-06 15:43:59.629279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:31.998 [2024-11-06 15:43:59.629302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.998 qpair failed and we were unable to recover it. 
00:39:32.258 [2024-11-06 15:43:59.639004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.258 [2024-11-06 15:43:59.639083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.258 [2024-11-06 15:43:59.639105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.258 [2024-11-06 15:43:59.639117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.258 [2024-11-06 15:43:59.639126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.258 [2024-11-06 15:43:59.639147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.258 qpair failed and we were unable to recover it. 
00:39:32.258 [2024-11-06 15:43:59.649095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.258 [2024-11-06 15:43:59.649169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.258 [2024-11-06 15:43:59.649191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.258 [2024-11-06 15:43:59.649208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.258 [2024-11-06 15:43:59.649217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.258 [2024-11-06 15:43:59.649239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.258 qpair failed and we were unable to recover it. 
00:39:32.258 [2024-11-06 15:43:59.659138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.258 [2024-11-06 15:43:59.659222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.258 [2024-11-06 15:43:59.659244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.258 [2024-11-06 15:43:59.659256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.258 [2024-11-06 15:43:59.659265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.258 [2024-11-06 15:43:59.659288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.258 qpair failed and we were unable to recover it. 
00:39:32.258 [2024-11-06 15:43:59.669242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.258 [2024-11-06 15:43:59.669328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.258 [2024-11-06 15:43:59.669349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.258 [2024-11-06 15:43:59.669361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.258 [2024-11-06 15:43:59.669369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.258 [2024-11-06 15:43:59.669392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.258 qpair failed and we were unable to recover it. 
00:39:32.258 [2024-11-06 15:43:59.679192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.258 [2024-11-06 15:43:59.679292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.258 [2024-11-06 15:43:59.679314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.258 [2024-11-06 15:43:59.679334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.258 [2024-11-06 15:43:59.679343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.258 [2024-11-06 15:43:59.679365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.258 qpair failed and we were unable to recover it. 
00:39:32.258 [2024-11-06 15:43:59.689282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.258 [2024-11-06 15:43:59.689355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.258 [2024-11-06 15:43:59.689378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.258 [2024-11-06 15:43:59.689390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.258 [2024-11-06 15:43:59.689399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.258 [2024-11-06 15:43:59.689421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.258 qpair failed and we were unable to recover it. 
00:39:32.258 [2024-11-06 15:43:59.699285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.258 [2024-11-06 15:43:59.699363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.258 [2024-11-06 15:43:59.699386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.258 [2024-11-06 15:43:59.699398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.699407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.699433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.709316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.259 [2024-11-06 15:43:59.709399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.259 [2024-11-06 15:43:59.709422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.259 [2024-11-06 15:43:59.709434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.709443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.709465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.719283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.259 [2024-11-06 15:43:59.719405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.259 [2024-11-06 15:43:59.719428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.259 [2024-11-06 15:43:59.719440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.719450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.719475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.729391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.259 [2024-11-06 15:43:59.729494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.259 [2024-11-06 15:43:59.729516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.259 [2024-11-06 15:43:59.729527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.729537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.729560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.739420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.259 [2024-11-06 15:43:59.739519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.259 [2024-11-06 15:43:59.739541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.259 [2024-11-06 15:43:59.739553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.739562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.739585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.749369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.259 [2024-11-06 15:43:59.749495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.259 [2024-11-06 15:43:59.749517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.259 [2024-11-06 15:43:59.749528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.749537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.749561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.759475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.259 [2024-11-06 15:43:59.759548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.259 [2024-11-06 15:43:59.759570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.259 [2024-11-06 15:43:59.759581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.759596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.759619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.769475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.259 [2024-11-06 15:43:59.769549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.259 [2024-11-06 15:43:59.769571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.259 [2024-11-06 15:43:59.769583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.769592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.769613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.779566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.259 [2024-11-06 15:43:59.779644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.259 [2024-11-06 15:43:59.779667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.259 [2024-11-06 15:43:59.779679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.779687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.779709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.789500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.259 [2024-11-06 15:43:59.789579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.259 [2024-11-06 15:43:59.789601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.259 [2024-11-06 15:43:59.789613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.789622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.789643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.799505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.259 [2024-11-06 15:43:59.799633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.259 [2024-11-06 15:43:59.799655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.259 [2024-11-06 15:43:59.799667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.799676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.799697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.809492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.259 [2024-11-06 15:43:59.809580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.259 [2024-11-06 15:43:59.809605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.259 [2024-11-06 15:43:59.809617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.809626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.809647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.819704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.259 [2024-11-06 15:43:59.819782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.259 [2024-11-06 15:43:59.819803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.259 [2024-11-06 15:43:59.819815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.259 [2024-11-06 15:43:59.819824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.259 [2024-11-06 15:43:59.819846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.259 qpair failed and we were unable to recover it. 
00:39:32.259 [2024-11-06 15:43:59.829607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.260 [2024-11-06 15:43:59.829693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.260 [2024-11-06 15:43:59.829715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.260 [2024-11-06 15:43:59.829728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.260 [2024-11-06 15:43:59.829737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.260 [2024-11-06 15:43:59.829759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.260 qpair failed and we were unable to recover it. 
00:39:32.260 [2024-11-06 15:43:59.839521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.260 [2024-11-06 15:43:59.839606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.260 [2024-11-06 15:43:59.839627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.260 [2024-11-06 15:43:59.839638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.260 [2024-11-06 15:43:59.839647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.260 [2024-11-06 15:43:59.839669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.260 qpair failed and we were unable to recover it. 
00:39:32.260 [2024-11-06 15:43:59.849655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.260 [2024-11-06 15:43:59.849745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.260 [2024-11-06 15:43:59.849766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.260 [2024-11-06 15:43:59.849777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.260 [2024-11-06 15:43:59.849789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.260 [2024-11-06 15:43:59.849811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.260 qpair failed and we were unable to recover it. 
00:39:32.260 [2024-11-06 15:43:59.859775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.260 [2024-11-06 15:43:59.859851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.260 [2024-11-06 15:43:59.859874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.260 [2024-11-06 15:43:59.859886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.260 [2024-11-06 15:43:59.859895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.260 [2024-11-06 15:43:59.859917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.260 qpair failed and we were unable to recover it. 
00:39:32.260 [2024-11-06 15:43:59.869914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.260 [2024-11-06 15:43:59.870009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.260 [2024-11-06 15:43:59.870030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.260 [2024-11-06 15:43:59.870041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.260 [2024-11-06 15:43:59.870050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.260 [2024-11-06 15:43:59.870073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.260 qpair failed and we were unable to recover it. 
00:39:32.260 [2024-11-06 15:43:59.879693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.260 [2024-11-06 15:43:59.879763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.260 [2024-11-06 15:43:59.879785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.260 [2024-11-06 15:43:59.879796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.260 [2024-11-06 15:43:59.879805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.260 [2024-11-06 15:43:59.879826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.260 qpair failed and we were unable to recover it. 
00:39:32.260 [2024-11-06 15:43:59.889801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.260 [2024-11-06 15:43:59.889880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.260 [2024-11-06 15:43:59.889901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.260 [2024-11-06 15:43:59.889913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.260 [2024-11-06 15:43:59.889922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.260 [2024-11-06 15:43:59.889944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.260 qpair failed and we were unable to recover it. 
00:39:32.520 [2024-11-06 15:43:59.899856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.520 [2024-11-06 15:43:59.899933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.520 [2024-11-06 15:43:59.899955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.520 [2024-11-06 15:43:59.899967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.520 [2024-11-06 15:43:59.899976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.520 [2024-11-06 15:43:59.899997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.520 qpair failed and we were unable to recover it. 
00:39:32.520 [2024-11-06 15:43:59.909825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.520 [2024-11-06 15:43:59.909901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.520 [2024-11-06 15:43:59.909923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.520 [2024-11-06 15:43:59.909935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.520 [2024-11-06 15:43:59.909944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.520 [2024-11-06 15:43:59.909966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.520 qpair failed and we were unable to recover it. 
00:39:32.520 [2024-11-06 15:43:59.919847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.520 [2024-11-06 15:43:59.919934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.520 [2024-11-06 15:43:59.919956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.520 [2024-11-06 15:43:59.919968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.520 [2024-11-06 15:43:59.919977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.520 [2024-11-06 15:43:59.919998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.520 qpair failed and we were unable to recover it. 
00:39:32.520 [2024-11-06 15:43:59.929932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.520 [2024-11-06 15:43:59.930007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.520 [2024-11-06 15:43:59.930029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.520 [2024-11-06 15:43:59.930040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.520 [2024-11-06 15:43:59.930049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.520 [2024-11-06 15:43:59.930071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.520 qpair failed and we were unable to recover it. 
00:39:32.520 [2024-11-06 15:43:59.940001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.520 [2024-11-06 15:43:59.940102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.520 [2024-11-06 15:43:59.940127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.520 [2024-11-06 15:43:59.940138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.520 [2024-11-06 15:43:59.940147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.520 [2024-11-06 15:43:59.940169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.520 qpair failed and we were unable to recover it. 
00:39:32.520 [2024-11-06 15:43:59.949982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.520 [2024-11-06 15:43:59.950060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.520 [2024-11-06 15:43:59.950081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.520 [2024-11-06 15:43:59.950093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.520 [2024-11-06 15:43:59.950102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.520 [2024-11-06 15:43:59.950124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.520 qpair failed and we were unable to recover it. 
00:39:32.520 [2024-11-06 15:43:59.960010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.520 [2024-11-06 15:43:59.960118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.520 [2024-11-06 15:43:59.960140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.520 [2024-11-06 15:43:59.960151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.520 [2024-11-06 15:43:59.960160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.521 [2024-11-06 15:43:59.960182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.521 qpair failed and we were unable to recover it. 
00:39:32.521 [2024-11-06 15:43:59.969940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.521 [2024-11-06 15:43:59.970016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.521 [2024-11-06 15:43:59.970038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.521 [2024-11-06 15:43:59.970049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.521 [2024-11-06 15:43:59.970058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.521 [2024-11-06 15:43:59.970080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.521 qpair failed and we were unable to recover it. 
00:39:32.521 [2024-11-06 15:43:59.980000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.521 [2024-11-06 15:43:59.980080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.521 [2024-11-06 15:43:59.980101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.521 [2024-11-06 15:43:59.980113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.521 [2024-11-06 15:43:59.980127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.521 [2024-11-06 15:43:59.980149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.521 qpair failed and we were unable to recover it. 
00:39:32.521 [2024-11-06 15:43:59.990228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.521 [2024-11-06 15:43:59.990311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.521 [2024-11-06 15:43:59.990333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.521 [2024-11-06 15:43:59.990345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.521 [2024-11-06 15:43:59.990354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.521 [2024-11-06 15:43:59.990376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.521 qpair failed and we were unable to recover it. 
00:39:32.521 [2024-11-06 15:44:00.000073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.521 [2024-11-06 15:44:00.000155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.521 [2024-11-06 15:44:00.000176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.521 [2024-11-06 15:44:00.000188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.521 [2024-11-06 15:44:00.000198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.521 [2024-11-06 15:44:00.000228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.521 qpair failed and we were unable to recover it. 
00:39:32.521 [2024-11-06 15:44:00.010304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.521 [2024-11-06 15:44:00.010402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.521 [2024-11-06 15:44:00.010427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.521 [2024-11-06 15:44:00.010442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.521 [2024-11-06 15:44:00.010453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.521 [2024-11-06 15:44:00.010479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.521 qpair failed and we were unable to recover it. 
00:39:32.521 [2024-11-06 15:44:00.020248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.521 [2024-11-06 15:44:00.020330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.521 [2024-11-06 15:44:00.020363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.521 [2024-11-06 15:44:00.020376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.521 [2024-11-06 15:44:00.020386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.521 [2024-11-06 15:44:00.020410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.521 qpair failed and we were unable to recover it. 
00:39:32.521 [2024-11-06 15:44:00.030219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.521 [2024-11-06 15:44:00.030297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.521 [2024-11-06 15:44:00.030320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.521 [2024-11-06 15:44:00.030332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.521 [2024-11-06 15:44:00.030341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.521 [2024-11-06 15:44:00.030368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.521 qpair failed and we were unable to recover it. 
00:39:32.521 [2024-11-06 15:44:00.040646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.521 [2024-11-06 15:44:00.040921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.521 [2024-11-06 15:44:00.040954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.521 [2024-11-06 15:44:00.040985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.521 [2024-11-06 15:44:00.041006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.521 [2024-11-06 15:44:00.041048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.521 qpair failed and we were unable to recover it. 
00:39:32.521 [2024-11-06 15:44:00.050407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.521 [2024-11-06 15:44:00.050483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.521 [2024-11-06 15:44:00.050507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.521 [2024-11-06 15:44:00.050520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.521 [2024-11-06 15:44:00.050529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.521 [2024-11-06 15:44:00.050553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.521 qpair failed and we were unable to recover it. 
00:39:32.521 [2024-11-06 15:44:00.060361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.521 [2024-11-06 15:44:00.060449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.521 [2024-11-06 15:44:00.060471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.521 [2024-11-06 15:44:00.060483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.521 [2024-11-06 15:44:00.060492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.521 [2024-11-06 15:44:00.060514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.521 qpair failed and we were unable to recover it. 
00:39:32.521 [2024-11-06 15:44:00.070326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.521 [2024-11-06 15:44:00.070411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.521 [2024-11-06 15:44:00.070437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.521 [2024-11-06 15:44:00.070449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.521 [2024-11-06 15:44:00.070458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.521 [2024-11-06 15:44:00.070481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.521 qpair failed and we were unable to recover it. 
00:39:32.522 [2024-11-06 15:44:00.080365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.522 [2024-11-06 15:44:00.080470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.522 [2024-11-06 15:44:00.080494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.522 [2024-11-06 15:44:00.080508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.522 [2024-11-06 15:44:00.080517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.522 [2024-11-06 15:44:00.080541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.522 qpair failed and we were unable to recover it. 
00:39:32.522 [2024-11-06 15:44:00.090431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.522 [2024-11-06 15:44:00.090512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.522 [2024-11-06 15:44:00.090536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.522 [2024-11-06 15:44:00.090549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.522 [2024-11-06 15:44:00.090558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.522 [2024-11-06 15:44:00.090581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.522 qpair failed and we were unable to recover it. 
00:39:32.522 [2024-11-06 15:44:00.100432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.522 [2024-11-06 15:44:00.100507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.522 [2024-11-06 15:44:00.100530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.522 [2024-11-06 15:44:00.100543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.522 [2024-11-06 15:44:00.100552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.522 [2024-11-06 15:44:00.100574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.522 qpair failed and we were unable to recover it. 
00:39:32.522 [2024-11-06 15:44:00.110459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:32.522 [2024-11-06 15:44:00.110537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:32.522 [2024-11-06 15:44:00.110559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:32.522 [2024-11-06 15:44:00.110574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:32.522 [2024-11-06 15:44:00.110583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:32.522 [2024-11-06 15:44:00.110605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:32.522 qpair failed and we were unable to recover it. 
00:39:32.522 [2024-11-06 15:44:00.120425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.522 [2024-11-06 15:44:00.120509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.522 [2024-11-06 15:44:00.120531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.522 [2024-11-06 15:44:00.120543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.522 [2024-11-06 15:44:00.120552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.522 [2024-11-06 15:44:00.120574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.522 qpair failed and we were unable to recover it.
00:39:32.522 [2024-11-06 15:44:00.130588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.522 [2024-11-06 15:44:00.130680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.522 [2024-11-06 15:44:00.130703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.522 [2024-11-06 15:44:00.130714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.522 [2024-11-06 15:44:00.130724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.522 [2024-11-06 15:44:00.130747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.522 qpair failed and we were unable to recover it.
00:39:32.522 [2024-11-06 15:44:00.140430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.522 [2024-11-06 15:44:00.140527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.522 [2024-11-06 15:44:00.140549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.522 [2024-11-06 15:44:00.140560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.522 [2024-11-06 15:44:00.140569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.522 [2024-11-06 15:44:00.140591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.522 qpair failed and we were unable to recover it.
00:39:32.522 [2024-11-06 15:44:00.150524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.522 [2024-11-06 15:44:00.150609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.522 [2024-11-06 15:44:00.150631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.522 [2024-11-06 15:44:00.150643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.522 [2024-11-06 15:44:00.150652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.522 [2024-11-06 15:44:00.150677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.522 qpair failed and we were unable to recover it.
00:39:32.783 [2024-11-06 15:44:00.160708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.783 [2024-11-06 15:44:00.160794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.783 [2024-11-06 15:44:00.160816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.783 [2024-11-06 15:44:00.160827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.783 [2024-11-06 15:44:00.160836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.783 [2024-11-06 15:44:00.160857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.783 qpair failed and we were unable to recover it.
00:39:32.783 [2024-11-06 15:44:00.170649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.783 [2024-11-06 15:44:00.170753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.783 [2024-11-06 15:44:00.170775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.783 [2024-11-06 15:44:00.170786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.783 [2024-11-06 15:44:00.170796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.783 [2024-11-06 15:44:00.170817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.783 qpair failed and we were unable to recover it.
00:39:32.783 [2024-11-06 15:44:00.180607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.783 [2024-11-06 15:44:00.180686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.783 [2024-11-06 15:44:00.180708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.783 [2024-11-06 15:44:00.180720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.783 [2024-11-06 15:44:00.180729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.783 [2024-11-06 15:44:00.180750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.783 qpair failed and we were unable to recover it.
00:39:32.783 [2024-11-06 15:44:00.190705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.783 [2024-11-06 15:44:00.190780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.783 [2024-11-06 15:44:00.190802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.783 [2024-11-06 15:44:00.190814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.783 [2024-11-06 15:44:00.190823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.783 [2024-11-06 15:44:00.190845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.783 qpair failed and we were unable to recover it.
00:39:32.783 [2024-11-06 15:44:00.200687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.783 [2024-11-06 15:44:00.200763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.783 [2024-11-06 15:44:00.200785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.783 [2024-11-06 15:44:00.200797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.783 [2024-11-06 15:44:00.200806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.783 [2024-11-06 15:44:00.200828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.783 qpair failed and we were unable to recover it.
00:39:32.783 [2024-11-06 15:44:00.210605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.783 [2024-11-06 15:44:00.210694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.783 [2024-11-06 15:44:00.210716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.783 [2024-11-06 15:44:00.210728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.783 [2024-11-06 15:44:00.210737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.783 [2024-11-06 15:44:00.210759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.783 qpair failed and we were unable to recover it.
00:39:32.783 [2024-11-06 15:44:00.220737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.783 [2024-11-06 15:44:00.220816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.783 [2024-11-06 15:44:00.220837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.783 [2024-11-06 15:44:00.220848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.783 [2024-11-06 15:44:00.220857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.783 [2024-11-06 15:44:00.220879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.783 qpair failed and we were unable to recover it.
00:39:32.783 [2024-11-06 15:44:00.230723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.783 [2024-11-06 15:44:00.230802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.783 [2024-11-06 15:44:00.230824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.783 [2024-11-06 15:44:00.230836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.783 [2024-11-06 15:44:00.230845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.783 [2024-11-06 15:44:00.230866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.783 qpair failed and we were unable to recover it.
00:39:32.783 [2024-11-06 15:44:00.240811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.783 [2024-11-06 15:44:00.240895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.783 [2024-11-06 15:44:00.240917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.783 [2024-11-06 15:44:00.240932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.783 [2024-11-06 15:44:00.240940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.783 [2024-11-06 15:44:00.240962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.783 qpair failed and we were unable to recover it.
00:39:32.783 [2024-11-06 15:44:00.250760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.783 [2024-11-06 15:44:00.250835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.783 [2024-11-06 15:44:00.250856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.784 [2024-11-06 15:44:00.250868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.784 [2024-11-06 15:44:00.250877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.784 [2024-11-06 15:44:00.250898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.784 qpair failed and we were unable to recover it.
00:39:32.784 [2024-11-06 15:44:00.260862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.784 [2024-11-06 15:44:00.260942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.784 [2024-11-06 15:44:00.260964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.784 [2024-11-06 15:44:00.260975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.784 [2024-11-06 15:44:00.260984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.784 [2024-11-06 15:44:00.261006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.784 qpair failed and we were unable to recover it.
00:39:32.784 [2024-11-06 15:44:00.270846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.784 [2024-11-06 15:44:00.270922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.784 [2024-11-06 15:44:00.270944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.784 [2024-11-06 15:44:00.270957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.784 [2024-11-06 15:44:00.270965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.784 [2024-11-06 15:44:00.270994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.784 qpair failed and we were unable to recover it.
00:39:32.784 [2024-11-06 15:44:00.280908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.784 [2024-11-06 15:44:00.280993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.784 [2024-11-06 15:44:00.281015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.784 [2024-11-06 15:44:00.281027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.784 [2024-11-06 15:44:00.281036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.784 [2024-11-06 15:44:00.281061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.784 qpair failed and we were unable to recover it.
00:39:32.784 [2024-11-06 15:44:00.290866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.784 [2024-11-06 15:44:00.290944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.784 [2024-11-06 15:44:00.290965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.784 [2024-11-06 15:44:00.290977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.784 [2024-11-06 15:44:00.290986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.784 [2024-11-06 15:44:00.291008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.784 qpair failed and we were unable to recover it.
00:39:32.784 [2024-11-06 15:44:00.300995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.784 [2024-11-06 15:44:00.301071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.784 [2024-11-06 15:44:00.301093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.784 [2024-11-06 15:44:00.301104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.784 [2024-11-06 15:44:00.301113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.784 [2024-11-06 15:44:00.301135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.784 qpair failed and we were unable to recover it.
00:39:32.784 [2024-11-06 15:44:00.311041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.784 [2024-11-06 15:44:00.311128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.784 [2024-11-06 15:44:00.311150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.784 [2024-11-06 15:44:00.311161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.784 [2024-11-06 15:44:00.311170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.784 [2024-11-06 15:44:00.311193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.784 qpair failed and we were unable to recover it.
00:39:32.784 [2024-11-06 15:44:00.320993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.784 [2024-11-06 15:44:00.321065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.784 [2024-11-06 15:44:00.321087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.784 [2024-11-06 15:44:00.321099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.784 [2024-11-06 15:44:00.321108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.784 [2024-11-06 15:44:00.321130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.784 qpair failed and we were unable to recover it.
00:39:32.784 [2024-11-06 15:44:00.331052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.784 [2024-11-06 15:44:00.331129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.784 [2024-11-06 15:44:00.331152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.784 [2024-11-06 15:44:00.331164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.784 [2024-11-06 15:44:00.331173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.784 [2024-11-06 15:44:00.331195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.784 qpair failed and we were unable to recover it.
00:39:32.784 [2024-11-06 15:44:00.341118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.784 [2024-11-06 15:44:00.341195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.784 [2024-11-06 15:44:00.341223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.784 [2024-11-06 15:44:00.341234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.784 [2024-11-06 15:44:00.341243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.784 [2024-11-06 15:44:00.341266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.784 qpair failed and we were unable to recover it.
00:39:32.784 [2024-11-06 15:44:00.351075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.784 [2024-11-06 15:44:00.351151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.784 [2024-11-06 15:44:00.351172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.784 [2024-11-06 15:44:00.351184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.784 [2024-11-06 15:44:00.351193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.784 [2024-11-06 15:44:00.351220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.784 qpair failed and we were unable to recover it.
00:39:32.784 [2024-11-06 15:44:00.361133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.784 [2024-11-06 15:44:00.361242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.784 [2024-11-06 15:44:00.361263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.784 [2024-11-06 15:44:00.361275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.784 [2024-11-06 15:44:00.361284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.784 [2024-11-06 15:44:00.361310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.784 qpair failed and we were unable to recover it.
00:39:32.784 [2024-11-06 15:44:00.371112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.785 [2024-11-06 15:44:00.371230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.785 [2024-11-06 15:44:00.371255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.785 [2024-11-06 15:44:00.371268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.785 [2024-11-06 15:44:00.371277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.785 [2024-11-06 15:44:00.371299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.785 qpair failed and we were unable to recover it.
00:39:32.785 [2024-11-06 15:44:00.381165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.785 [2024-11-06 15:44:00.381248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.785 [2024-11-06 15:44:00.381270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.785 [2024-11-06 15:44:00.381282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.785 [2024-11-06 15:44:00.381291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.785 [2024-11-06 15:44:00.381320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.785 qpair failed and we were unable to recover it.
00:39:32.785 [2024-11-06 15:44:00.391227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.785 [2024-11-06 15:44:00.391350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.785 [2024-11-06 15:44:00.391373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.785 [2024-11-06 15:44:00.391385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.785 [2024-11-06 15:44:00.391394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.785 [2024-11-06 15:44:00.391416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.785 qpair failed and we were unable to recover it.
00:39:32.785 [2024-11-06 15:44:00.401265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.785 [2024-11-06 15:44:00.401339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.785 [2024-11-06 15:44:00.401361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.785 [2024-11-06 15:44:00.401373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.785 [2024-11-06 15:44:00.401382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.785 [2024-11-06 15:44:00.401404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.785 qpair failed and we were unable to recover it.
00:39:32.785 [2024-11-06 15:44:00.411270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:32.785 [2024-11-06 15:44:00.411351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:32.785 [2024-11-06 15:44:00.411372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:32.785 [2024-11-06 15:44:00.411384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:32.785 [2024-11-06 15:44:00.411396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:32.785 [2024-11-06 15:44:00.411418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:32.785 qpair failed and we were unable to recover it.
00:39:33.045 [2024-11-06 15:44:00.421317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.046 [2024-11-06 15:44:00.421395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.046 [2024-11-06 15:44:00.421417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.046 [2024-11-06 15:44:00.421429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.046 [2024-11-06 15:44:00.421438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:33.046 [2024-11-06 15:44:00.421460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:33.046 qpair failed and we were unable to recover it.
00:39:33.046 [2024-11-06 15:44:00.431452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.046 [2024-11-06 15:44:00.431525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.046 [2024-11-06 15:44:00.431548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.046 [2024-11-06 15:44:00.431560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.046 [2024-11-06 15:44:00.431569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:33.046 [2024-11-06 15:44:00.431591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:33.046 qpair failed and we were unable to recover it.
00:39:33.046 [2024-11-06 15:44:00.441388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.046 [2024-11-06 15:44:00.441466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.046 [2024-11-06 15:44:00.441488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.046 [2024-11-06 15:44:00.441500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.046 [2024-11-06 15:44:00.441509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:33.046 [2024-11-06 15:44:00.441530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:33.046 qpair failed and we were unable to recover it.
00:39:33.046 [2024-11-06 15:44:00.451373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.046 [2024-11-06 15:44:00.451453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.046 [2024-11-06 15:44:00.451475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.046 [2024-11-06 15:44:00.451486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.046 [2024-11-06 15:44:00.451495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:33.046 [2024-11-06 15:44:00.451517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:33.046 qpair failed and we were unable to recover it.
00:39:33.046 [2024-11-06 15:44:00.461482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:33.046 [2024-11-06 15:44:00.461566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:33.046 [2024-11-06 15:44:00.461587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:33.046 [2024-11-06 15:44:00.461599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:33.046 [2024-11-06 15:44:00.461608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80
00:39:33.046 [2024-11-06 15:44:00.461630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:33.046 qpair failed and we were unable to recover it.
00:39:33.046 [2024-11-06 15:44:00.471503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.046 [2024-11-06 15:44:00.471609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.046 [2024-11-06 15:44:00.471632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.046 [2024-11-06 15:44:00.471643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.046 [2024-11-06 15:44:00.471651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.046 [2024-11-06 15:44:00.471674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.046 qpair failed and we were unable to recover it. 
00:39:33.046 [2024-11-06 15:44:00.481461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.046 [2024-11-06 15:44:00.481552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.046 [2024-11-06 15:44:00.481574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.046 [2024-11-06 15:44:00.481586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.046 [2024-11-06 15:44:00.481595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.046 [2024-11-06 15:44:00.481617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.046 qpair failed and we were unable to recover it. 
00:39:33.046 [2024-11-06 15:44:00.491513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.046 [2024-11-06 15:44:00.491587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.046 [2024-11-06 15:44:00.491609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.046 [2024-11-06 15:44:00.491621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.046 [2024-11-06 15:44:00.491630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.046 [2024-11-06 15:44:00.491652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.046 qpair failed and we were unable to recover it. 
00:39:33.046 [2024-11-06 15:44:00.501600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.046 [2024-11-06 15:44:00.501694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.046 [2024-11-06 15:44:00.501720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.046 [2024-11-06 15:44:00.501732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.046 [2024-11-06 15:44:00.501740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.046 [2024-11-06 15:44:00.501762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.046 qpair failed and we were unable to recover it. 
00:39:33.046 [2024-11-06 15:44:00.511580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.046 [2024-11-06 15:44:00.511664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.046 [2024-11-06 15:44:00.511686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.046 [2024-11-06 15:44:00.511697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.046 [2024-11-06 15:44:00.511706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.046 [2024-11-06 15:44:00.511729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.046 qpair failed and we were unable to recover it. 
00:39:33.046 [2024-11-06 15:44:00.521658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.046 [2024-11-06 15:44:00.521735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.046 [2024-11-06 15:44:00.521757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.046 [2024-11-06 15:44:00.521768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.046 [2024-11-06 15:44:00.521777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.046 [2024-11-06 15:44:00.521799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.046 qpair failed and we were unable to recover it. 
00:39:33.046 [2024-11-06 15:44:00.531654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.046 [2024-11-06 15:44:00.531736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.046 [2024-11-06 15:44:00.531757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.046 [2024-11-06 15:44:00.531775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.046 [2024-11-06 15:44:00.531783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.046 [2024-11-06 15:44:00.531804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.047 qpair failed and we were unable to recover it. 
00:39:33.047 [2024-11-06 15:44:00.541672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.047 [2024-11-06 15:44:00.541750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.047 [2024-11-06 15:44:00.541772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.047 [2024-11-06 15:44:00.541784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.047 [2024-11-06 15:44:00.541796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.047 [2024-11-06 15:44:00.541817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.047 qpair failed and we were unable to recover it. 
00:39:33.047 [2024-11-06 15:44:00.551795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.047 [2024-11-06 15:44:00.551900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.047 [2024-11-06 15:44:00.551922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.047 [2024-11-06 15:44:00.551933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.047 [2024-11-06 15:44:00.551942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.047 [2024-11-06 15:44:00.551964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.047 qpair failed and we were unable to recover it. 
00:39:33.047 [2024-11-06 15:44:00.561702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.047 [2024-11-06 15:44:00.561808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.047 [2024-11-06 15:44:00.561829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.047 [2024-11-06 15:44:00.561841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.047 [2024-11-06 15:44:00.561850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.047 [2024-11-06 15:44:00.561872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.047 qpair failed and we were unable to recover it. 
00:39:33.047 [2024-11-06 15:44:00.571747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.047 [2024-11-06 15:44:00.571822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.047 [2024-11-06 15:44:00.571843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.047 [2024-11-06 15:44:00.571855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.047 [2024-11-06 15:44:00.571865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.047 [2024-11-06 15:44:00.571886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.047 qpair failed and we were unable to recover it. 
00:39:33.047 [2024-11-06 15:44:00.581798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.047 [2024-11-06 15:44:00.581871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.047 [2024-11-06 15:44:00.581894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.047 [2024-11-06 15:44:00.581905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.047 [2024-11-06 15:44:00.581915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.047 [2024-11-06 15:44:00.581937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.047 qpair failed and we were unable to recover it. 
00:39:33.047 [2024-11-06 15:44:00.591808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.047 [2024-11-06 15:44:00.591891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.047 [2024-11-06 15:44:00.591913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.047 [2024-11-06 15:44:00.591925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.047 [2024-11-06 15:44:00.591934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.047 [2024-11-06 15:44:00.591956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.047 qpair failed and we were unable to recover it. 
00:39:33.047 [2024-11-06 15:44:00.601807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.047 [2024-11-06 15:44:00.601885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.047 [2024-11-06 15:44:00.601908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.047 [2024-11-06 15:44:00.601919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.047 [2024-11-06 15:44:00.601928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.047 [2024-11-06 15:44:00.601950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.047 qpair failed and we were unable to recover it. 
00:39:33.047 [2024-11-06 15:44:00.611867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.047 [2024-11-06 15:44:00.611944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.047 [2024-11-06 15:44:00.611966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.047 [2024-11-06 15:44:00.611977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.047 [2024-11-06 15:44:00.611986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.047 [2024-11-06 15:44:00.612008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.047 qpair failed and we were unable to recover it. 
00:39:33.047 [2024-11-06 15:44:00.621946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.047 [2024-11-06 15:44:00.622026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.047 [2024-11-06 15:44:00.622047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.047 [2024-11-06 15:44:00.622059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.047 [2024-11-06 15:44:00.622068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.047 [2024-11-06 15:44:00.622091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.047 qpair failed and we were unable to recover it. 
00:39:33.047 [2024-11-06 15:44:00.631989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.047 [2024-11-06 15:44:00.632067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.047 [2024-11-06 15:44:00.632091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.047 [2024-11-06 15:44:00.632103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.047 [2024-11-06 15:44:00.632112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.047 [2024-11-06 15:44:00.632134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.047 qpair failed and we were unable to recover it. 
00:39:33.047 [2024-11-06 15:44:00.642018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.047 [2024-11-06 15:44:00.642097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.047 [2024-11-06 15:44:00.642120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.047 [2024-11-06 15:44:00.642131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.047 [2024-11-06 15:44:00.642140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.047 [2024-11-06 15:44:00.642161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.047 qpair failed and we were unable to recover it. 
00:39:33.047 [2024-11-06 15:44:00.651963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.047 [2024-11-06 15:44:00.652036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.047 [2024-11-06 15:44:00.652057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.048 [2024-11-06 15:44:00.652068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.048 [2024-11-06 15:44:00.652077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.048 [2024-11-06 15:44:00.652098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.048 qpair failed and we were unable to recover it. 
00:39:33.048 [2024-11-06 15:44:00.662012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.048 [2024-11-06 15:44:00.662092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.048 [2024-11-06 15:44:00.662114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.048 [2024-11-06 15:44:00.662126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.048 [2024-11-06 15:44:00.662135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.048 [2024-11-06 15:44:00.662157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.048 qpair failed and we were unable to recover it. 
00:39:33.048 [2024-11-06 15:44:00.672067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.048 [2024-11-06 15:44:00.672145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.048 [2024-11-06 15:44:00.672166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.048 [2024-11-06 15:44:00.672180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.048 [2024-11-06 15:44:00.672189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.048 [2024-11-06 15:44:00.672222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.048 qpair failed and we were unable to recover it. 
00:39:33.308 [2024-11-06 15:44:00.682082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.308 [2024-11-06 15:44:00.682188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.308 [2024-11-06 15:44:00.682215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.308 [2024-11-06 15:44:00.682227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.308 [2024-11-06 15:44:00.682236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.308 [2024-11-06 15:44:00.682258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.308 qpair failed and we were unable to recover it. 
00:39:33.308 [2024-11-06 15:44:00.692046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.308 [2024-11-06 15:44:00.692123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.308 [2024-11-06 15:44:00.692145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.308 [2024-11-06 15:44:00.692156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.308 [2024-11-06 15:44:00.692165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.308 [2024-11-06 15:44:00.692190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.308 qpair failed and we were unable to recover it. 
00:39:33.308 [2024-11-06 15:44:00.702214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.308 [2024-11-06 15:44:00.702292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.308 [2024-11-06 15:44:00.702314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.308 [2024-11-06 15:44:00.702326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.308 [2024-11-06 15:44:00.702335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.308 [2024-11-06 15:44:00.702357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.308 qpair failed and we were unable to recover it. 
00:39:33.308 [2024-11-06 15:44:00.712224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.308 [2024-11-06 15:44:00.712300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.308 [2024-11-06 15:44:00.712323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.308 [2024-11-06 15:44:00.712334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.308 [2024-11-06 15:44:00.712343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.308 [2024-11-06 15:44:00.712368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.308 qpair failed and we were unable to recover it. 
00:39:33.308 [2024-11-06 15:44:00.722168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.308 [2024-11-06 15:44:00.722285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.308 [2024-11-06 15:44:00.722306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.308 [2024-11-06 15:44:00.722318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.308 [2024-11-06 15:44:00.722327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.308 [2024-11-06 15:44:00.722348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.308 qpair failed and we were unable to recover it. 
00:39:33.308 [2024-11-06 15:44:00.732285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.308 [2024-11-06 15:44:00.732359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.308 [2024-11-06 15:44:00.732381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.308 [2024-11-06 15:44:00.732393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.308 [2024-11-06 15:44:00.732401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.308 [2024-11-06 15:44:00.732423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.308 qpair failed and we were unable to recover it. 
00:39:33.308 [2024-11-06 15:44:00.742274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.308 [2024-11-06 15:44:00.742355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.308 [2024-11-06 15:44:00.742377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.308 [2024-11-06 15:44:00.742389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.308 [2024-11-06 15:44:00.742398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.308 [2024-11-06 15:44:00.742421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.308 qpair failed and we were unable to recover it. 
00:39:33.308 [2024-11-06 15:44:00.752284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.308 [2024-11-06 15:44:00.752362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.308 [2024-11-06 15:44:00.752385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.308 [2024-11-06 15:44:00.752397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.308 [2024-11-06 15:44:00.752405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.308 [2024-11-06 15:44:00.752427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.308 qpair failed and we were unable to recover it. 
00:39:33.308 [2024-11-06 15:44:00.762295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.762407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.762429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.762440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.762449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.309 [2024-11-06 15:44:00.762471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.309 qpair failed and we were unable to recover it. 
00:39:33.309 [2024-11-06 15:44:00.772294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.772367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.772388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.772400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.772409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.309 [2024-11-06 15:44:00.772431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.309 qpair failed and we were unable to recover it. 
00:39:33.309 [2024-11-06 15:44:00.782345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.782426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.782448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.782460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.782469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.309 [2024-11-06 15:44:00.782492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.309 qpair failed and we were unable to recover it. 
00:39:33.309 [2024-11-06 15:44:00.792359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.792448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.792470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.792481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.792490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.309 [2024-11-06 15:44:00.792512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.309 qpair failed and we were unable to recover it. 
00:39:33.309 [2024-11-06 15:44:00.802419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.802490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.802512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.802526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.802535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.309 [2024-11-06 15:44:00.802557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.309 qpair failed and we were unable to recover it. 
00:39:33.309 [2024-11-06 15:44:00.812462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.812584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.812606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.812617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.812626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.309 [2024-11-06 15:44:00.812648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.309 qpair failed and we were unable to recover it. 
00:39:33.309 [2024-11-06 15:44:00.822428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.822518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.822541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.822552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.822561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.309 [2024-11-06 15:44:00.822583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.309 qpair failed and we were unable to recover it. 
00:39:33.309 [2024-11-06 15:44:00.832438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.832518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.832540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.832552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.832560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.309 [2024-11-06 15:44:00.832582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.309 qpair failed and we were unable to recover it. 
00:39:33.309 [2024-11-06 15:44:00.842539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.842611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.842634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.842645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.842654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.309 [2024-11-06 15:44:00.842678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.309 qpair failed and we were unable to recover it. 
00:39:33.309 [2024-11-06 15:44:00.852618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.852700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.852722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.852734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.852743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.309 [2024-11-06 15:44:00.852764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.309 qpair failed and we were unable to recover it. 
00:39:33.309 [2024-11-06 15:44:00.862532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.862608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.862629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.862641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.862649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.309 [2024-11-06 15:44:00.862671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.309 qpair failed and we were unable to recover it. 
00:39:33.309 [2024-11-06 15:44:00.872582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.872660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.872682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.872694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.872703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.309 [2024-11-06 15:44:00.872724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.309 qpair failed and we were unable to recover it. 
00:39:33.309 [2024-11-06 15:44:00.882662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.309 [2024-11-06 15:44:00.882745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.309 [2024-11-06 15:44:00.882767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.309 [2024-11-06 15:44:00.882779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.309 [2024-11-06 15:44:00.882787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.310 [2024-11-06 15:44:00.882810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.310 qpair failed and we were unable to recover it. 
00:39:33.310 [2024-11-06 15:44:00.892742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.310 [2024-11-06 15:44:00.892821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.310 [2024-11-06 15:44:00.892842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.310 [2024-11-06 15:44:00.892854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.310 [2024-11-06 15:44:00.892863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.310 [2024-11-06 15:44:00.892884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.310 qpair failed and we were unable to recover it. 
00:39:33.310 [2024-11-06 15:44:00.902680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.310 [2024-11-06 15:44:00.902756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.310 [2024-11-06 15:44:00.902778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.310 [2024-11-06 15:44:00.902789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.310 [2024-11-06 15:44:00.902798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.310 [2024-11-06 15:44:00.902819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.310 qpair failed and we were unable to recover it. 
00:39:33.310 [2024-11-06 15:44:00.912822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.310 [2024-11-06 15:44:00.912897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.310 [2024-11-06 15:44:00.912919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.310 [2024-11-06 15:44:00.912930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.310 [2024-11-06 15:44:00.912939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.310 [2024-11-06 15:44:00.912961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.310 qpair failed and we were unable to recover it. 
00:39:33.310 [2024-11-06 15:44:00.922748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.310 [2024-11-06 15:44:00.922824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.310 [2024-11-06 15:44:00.922848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.310 [2024-11-06 15:44:00.922859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.310 [2024-11-06 15:44:00.922870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.310 [2024-11-06 15:44:00.922892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.310 qpair failed and we were unable to recover it. 
00:39:33.310 [2024-11-06 15:44:00.932797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.310 [2024-11-06 15:44:00.932876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.310 [2024-11-06 15:44:00.932900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.310 [2024-11-06 15:44:00.932912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.310 [2024-11-06 15:44:00.932921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.310 [2024-11-06 15:44:00.932943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.310 qpair failed and we were unable to recover it. 
00:39:33.310 [2024-11-06 15:44:00.942813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.310 [2024-11-06 15:44:00.942895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.310 [2024-11-06 15:44:00.942917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.310 [2024-11-06 15:44:00.942929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.310 [2024-11-06 15:44:00.942938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.310 [2024-11-06 15:44:00.942959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.310 qpair failed and we were unable to recover it. 
00:39:33.569 [2024-11-06 15:44:00.952835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.569 [2024-11-06 15:44:00.952919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.569 [2024-11-06 15:44:00.952942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.569 [2024-11-06 15:44:00.952954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.569 [2024-11-06 15:44:00.952963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.569 [2024-11-06 15:44:00.952985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.569 qpair failed and we were unable to recover it. 
00:39:33.569 [2024-11-06 15:44:00.962829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:33.569 [2024-11-06 15:44:00.962906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:33.569 [2024-11-06 15:44:00.962929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:33.569 [2024-11-06 15:44:00.962942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:33.569 [2024-11-06 15:44:00.962952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500032eb80 00:39:33.569 [2024-11-06 15:44:00.962975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:33.569 qpair failed and we were unable to recover it. 00:39:33.569 [2024-11-06 15:44:00.963331] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:39:33.569 A controller has encountered a failure and is being reset. 00:39:33.570 [2024-11-06 15:44:00.963447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e180 (9): Bad file descriptor 00:39:33.570 Controller properly reset. 
00:39:33.570 Initializing NVMe Controllers 00:39:33.570 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:33.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:33.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:39:33.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:39:33.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:39:33.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:39:33.570 Initialization complete. Launching workers. 00:39:33.570 Starting thread on core 1 00:39:33.570 Starting thread on core 2 00:39:33.570 Starting thread on core 0 00:39:33.570 Starting thread on core 3 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:39:33.570 00:39:33.570 real 0m11.524s 00:39:33.570 user 0m21.699s 00:39:33.570 sys 0m4.564s 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:33.570 ************************************ 00:39:33.570 END TEST nvmf_target_disconnect_tc2 00:39:33.570 ************************************ 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:33.570 15:44:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.570 rmmod nvme_tcp 00:39:33.570 rmmod nvme_fabrics 00:39:33.570 rmmod nvme_keyring 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 4110361 ']' 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 4110361 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 4110361 ']' 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 4110361 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:33.570 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4110361 00:39:33.829 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:39:33.829 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 
00:39:33.829 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4110361' 00:39:33.829 killing process with pid 4110361 00:39:33.829 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 4110361 00:39:33.829 15:44:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 4110361 00:39:35.209 15:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:35.209 15:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:35.209 15:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:35.209 15:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:39:35.209 15:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:39:35.209 15:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:35.209 15:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:39:35.209 15:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:35.209 15:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:35.209 15:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.209 15:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:35.209 15:44:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.114 15:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:37.114 00:39:37.114 real 0m21.469s 00:39:37.114 user 0m52.424s 00:39:37.114 
sys 0m9.689s 00:39:37.114 15:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:37.114 15:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:37.114 ************************************ 00:39:37.114 END TEST nvmf_target_disconnect 00:39:37.114 ************************************ 00:39:37.114 15:44:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:37.114 00:39:37.114 real 8m28.668s 00:39:37.114 user 19m49.646s 00:39:37.114 sys 2m14.755s 00:39:37.114 15:44:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:37.114 15:44:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.114 ************************************ 00:39:37.114 END TEST nvmf_host 00:39:37.114 ************************************ 00:39:37.114 15:44:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:39:37.114 15:44:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:39:37.114 15:44:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:39:37.114 15:44:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:37.114 15:44:04 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:37.114 15:44:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:37.114 ************************************ 00:39:37.114 START TEST nvmf_target_core_interrupt_mode 00:39:37.114 ************************************ 00:39:37.114 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:39:37.374 * Looking for test storage... 
00:39:37.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:39:37.374 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:39:37.375 15:44:04 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:37.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.375 --rc 
genhtml_branch_coverage=1 00:39:37.375 --rc genhtml_function_coverage=1 00:39:37.375 --rc genhtml_legend=1 00:39:37.375 --rc geninfo_all_blocks=1 00:39:37.375 --rc geninfo_unexecuted_blocks=1 00:39:37.375 00:39:37.375 ' 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:37.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.375 --rc genhtml_branch_coverage=1 00:39:37.375 --rc genhtml_function_coverage=1 00:39:37.375 --rc genhtml_legend=1 00:39:37.375 --rc geninfo_all_blocks=1 00:39:37.375 --rc geninfo_unexecuted_blocks=1 00:39:37.375 00:39:37.375 ' 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:37.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.375 --rc genhtml_branch_coverage=1 00:39:37.375 --rc genhtml_function_coverage=1 00:39:37.375 --rc genhtml_legend=1 00:39:37.375 --rc geninfo_all_blocks=1 00:39:37.375 --rc geninfo_unexecuted_blocks=1 00:39:37.375 00:39:37.375 ' 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:37.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.375 --rc genhtml_branch_coverage=1 00:39:37.375 --rc genhtml_function_coverage=1 00:39:37.375 --rc genhtml_legend=1 00:39:37.375 --rc geninfo_all_blocks=1 00:39:37.375 --rc geninfo_unexecuted_blocks=1 00:39:37.375 00:39:37.375 ' 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.375 
15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.375 15:44:04 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:37.375 
15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:37.375 ************************************ 00:39:37.375 START TEST nvmf_abort 00:39:37.375 ************************************ 00:39:37.375 15:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:39:37.636 * Looking for test storage... 
00:39:37.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:39:37.636 15:44:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:37.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.636 --rc genhtml_branch_coverage=1 00:39:37.636 --rc genhtml_function_coverage=1 00:39:37.636 --rc genhtml_legend=1 00:39:37.636 --rc geninfo_all_blocks=1 00:39:37.636 --rc geninfo_unexecuted_blocks=1 00:39:37.636 00:39:37.636 ' 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:37.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.636 --rc genhtml_branch_coverage=1 00:39:37.636 --rc genhtml_function_coverage=1 00:39:37.636 --rc genhtml_legend=1 00:39:37.636 --rc geninfo_all_blocks=1 00:39:37.636 --rc geninfo_unexecuted_blocks=1 00:39:37.636 00:39:37.636 ' 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:37.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.636 --rc genhtml_branch_coverage=1 00:39:37.636 --rc genhtml_function_coverage=1 00:39:37.636 --rc genhtml_legend=1 00:39:37.636 --rc geninfo_all_blocks=1 00:39:37.636 --rc geninfo_unexecuted_blocks=1 00:39:37.636 00:39:37.636 ' 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:37.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.636 --rc genhtml_branch_coverage=1 00:39:37.636 --rc genhtml_function_coverage=1 00:39:37.636 --rc genhtml_legend=1 00:39:37.636 --rc geninfo_all_blocks=1 00:39:37.636 --rc geninfo_unexecuted_blocks=1 00:39:37.636 00:39:37.636 ' 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.636 15:44:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.636 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.637 15:44:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:39:37.637 15:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:44.207 15:44:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:44.207 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:44.207 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:44.207 
15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:44.207 Found net devices under 0000:86:00.0: cvl_0_0 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:44.207 Found net devices under 0000:86:00.1: cvl_0_1 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:44.207 15:44:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:44.207 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:44.208 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:44.208 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:44.208 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:44.208 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:44.208 15:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:44.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:44.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:39:44.208 00:39:44.208 --- 10.0.0.2 ping statistics --- 00:39:44.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:44.208 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:44.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:44.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:39:44.208 00:39:44.208 --- 10.0.0.1 ping statistics --- 00:39:44.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:44.208 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=4115136 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4115136 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 4115136 ']' 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:44.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:44.208 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:44.208 [2024-11-06 15:44:11.187457] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:44.208 [2024-11-06 15:44:11.189594] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:39:44.208 [2024-11-06 15:44:11.189667] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:44.208 [2024-11-06 15:44:11.323665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:44.208 [2024-11-06 15:44:11.429101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:44.208 [2024-11-06 15:44:11.429144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:44.208 [2024-11-06 15:44:11.429156] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:44.208 [2024-11-06 15:44:11.429166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:44.208 [2024-11-06 15:44:11.429175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:44.208 [2024-11-06 15:44:11.431522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:44.208 [2024-11-06 15:44:11.431584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:44.208 [2024-11-06 15:44:11.431608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:44.208 [2024-11-06 15:44:11.735053] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:44.208 [2024-11-06 15:44:11.735984] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:44.208 [2024-11-06 15:44:11.736143] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:39:44.208 [2024-11-06 15:44:11.736377] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:44.468 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:44.468 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:39:44.468 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:44.468 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:44.468 15:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:44.468 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:44.468 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:39:44.468 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:44.468 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:44.468 [2024-11-06 15:44:12.040797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:44.468 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:44.468 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:39:44.468 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:44.468 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:39:44.727 Malloc0 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:44.727 Delay0 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:44.727 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:39:44.728 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:44.728 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:44.728 [2024-11-06 15:44:12.192565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:44.728 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:44.728 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:44.728 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:44.728 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:44.728 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:44.728 15:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:39:44.728 [2024-11-06 15:44:12.320036] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:47.261 Initializing NVMe Controllers 00:39:47.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:47.261 controller IO queue size 128 less than required 00:39:47.261 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:39:47.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:39:47.261 Initialization complete. Launching workers. 
00:39:47.261 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34380 00:39:47.261 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34437, failed to submit 66 00:39:47.261 success 34380, unsuccessful 57, failed 0 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:47.261 rmmod nvme_tcp 00:39:47.261 rmmod nvme_fabrics 00:39:47.261 rmmod nvme_keyring 00:39:47.261 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:47.262 15:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4115136 ']' 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4115136 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 4115136 ']' 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 4115136 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4115136 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4115136' 00:39:47.262 killing process with pid 4115136 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 4115136 00:39:47.262 15:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 4115136 00:39:48.200 15:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:48.200 15:44:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:48.200 15:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:48.200 15:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:39:48.200 15:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:39:48.200 15:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:48.200 15:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:39:48.200 15:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:48.200 15:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:48.200 15:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:48.200 15:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:48.200 15:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:50.735 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:50.735 00:39:50.735 real 0m12.890s 00:39:50.735 user 0m12.297s 00:39:50.735 sys 0m5.767s 00:39:50.735 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:50.735 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:50.735 ************************************ 00:39:50.735 END TEST nvmf_abort 00:39:50.735 ************************************ 00:39:50.735 15:44:17 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:39:50.735 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:50.735 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:50.735 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:50.735 ************************************ 00:39:50.735 START TEST nvmf_ns_hotplug_stress 00:39:50.735 ************************************ 00:39:50.735 15:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:39:50.735 * Looking for test storage... 
00:39:50.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:39:50.735 15:44:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:39:50.735 15:44:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:50.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.735 --rc genhtml_branch_coverage=1 00:39:50.735 --rc genhtml_function_coverage=1 00:39:50.735 --rc genhtml_legend=1 00:39:50.735 --rc geninfo_all_blocks=1 00:39:50.735 --rc geninfo_unexecuted_blocks=1 00:39:50.735 00:39:50.735 ' 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:50.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.735 --rc genhtml_branch_coverage=1 00:39:50.735 --rc genhtml_function_coverage=1 00:39:50.735 --rc genhtml_legend=1 00:39:50.735 --rc geninfo_all_blocks=1 00:39:50.735 --rc geninfo_unexecuted_blocks=1 00:39:50.735 00:39:50.735 ' 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:50.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.735 --rc genhtml_branch_coverage=1 00:39:50.735 --rc genhtml_function_coverage=1 00:39:50.735 --rc genhtml_legend=1 00:39:50.735 --rc geninfo_all_blocks=1 00:39:50.735 --rc geninfo_unexecuted_blocks=1 00:39:50.735 00:39:50.735 ' 00:39:50.735 15:44:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:50.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.735 --rc genhtml_branch_coverage=1 00:39:50.735 --rc genhtml_function_coverage=1 00:39:50.735 --rc genhtml_legend=1 00:39:50.735 --rc geninfo_all_blocks=1 00:39:50.735 --rc geninfo_unexecuted_blocks=1 00:39:50.735 00:39:50.735 ' 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:50.735 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:50.736 15:44:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.736 
15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:39:50.736 15:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:56.134 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:56.134 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:39:56.134 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:56.134 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:56.134 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:56.134 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:56.134 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:56.134 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:39:56.134 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:39:56.135 
15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:56.135 15:44:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:56.135 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.135 15:44:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:56.135 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.135 
15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:56.135 Found net devices under 0000:86:00.0: cvl_0_0 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:56.135 Found net devices under 0000:86:00.1: cvl_0_1 00:39:56.135 
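The "Found net devices under …" lines above come from common.sh globbing `/sys/bus/pci/devices/$pci/net/*` for each matched NIC. A minimal stand-alone sketch of that sysfs lookup (no SPDK assumed; the interface names printed depend entirely on the machine):

```shell
#!/usr/bin/env bash
# Enumerate every PCI device that exposes a network interface, using the same
# sysfs layout common.sh@411 expands with pci_net_devs=(".../$pci/net/"*).
count=0
for pci in /sys/bus/pci/devices/*; do
  [ -d "$pci/net" ] || continue              # device has no net interfaces
  for dev in "$pci"/net/*; do
    echo "Found net devices under ${pci##*/}: ${dev##*/}"
    count=$((count + 1))
  done
done
echo "total interfaces: $count"
```

On the log's machine this prints the two E810 ports (`cvl_0_0`, `cvl_0_1`) under 0000:86:00.0 and 0000:86:00.1; elsewhere it prints whatever NICs are present, possibly none.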
15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:56.135 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:56.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:56.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:39:56.394 00:39:56.394 --- 10.0.0.2 ping statistics --- 00:39:56.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.394 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:56.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:56.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:39:56.394 00:39:56.394 --- 10.0.0.1 ping statistics --- 00:39:56.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.394 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:56.394 15:44:23 
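The plumbing above — create `cvl_0_0_ns_spdk`, move the target port into it, assign 10.0.0.1/10.0.0.2, bring the links up, open TCP/4420 in iptables, then ping in both directions — can be reproduced without the physical E810 ports. This sketch substitutes a veth pair for cvl_0_0/cvl_0_1; the names and the non-root skip are assumptions (not part of common.sh), and the network commands need root:

```shell
#!/usr/bin/env bash
# veth stand-in for the log's physical-port namespace split (assumption: no SPDK needed).
NS=spdk_tgt_ns
result=0
if [ "$(id -u)" -eq 0 ] && ip netns add "$NS" 2>/dev/null; then
  ip link add veth_init type veth peer name veth_tgt
  ip link set veth_tgt netns "$NS"                          # like: ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev veth_init                     # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt  # target side
  ip link set veth_init up
  ip netns exec "$NS" ip link set veth_tgt up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP port toward the initiator interface, as nvmf/common.sh does
  iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1 || result=1
  iptables -D INPUT -i veth_init -p tcp --dport 4420 -j ACCEPT
  ip netns del "$NS"
else
  echo "needs root/CAP_NET_ADMIN; skipping"                 # keeps the sketch runnable anywhere
fi
```

The two-sided ping mirrors the log's verification that both the bare side and the namespaced side can reach each other before the target starts listening.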
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4119350 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4119350 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 4119350 ']' 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:56.394 15:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:56.653 [2024-11-06 15:44:24.055594] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:56.653 [2024-11-06 15:44:24.057638] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:39:56.653 [2024-11-06 15:44:24.057705] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:56.653 [2024-11-06 15:44:24.185454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:56.911 [2024-11-06 15:44:24.290930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:56.911 [2024-11-06 15:44:24.290971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:56.912 [2024-11-06 15:44:24.290984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:56.912 [2024-11-06 15:44:24.290993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:56.912 [2024-11-06 15:44:24.291002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:56.912 [2024-11-06 15:44:24.293333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:56.912 [2024-11-06 15:44:24.293396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.912 [2024-11-06 15:44:24.293419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:57.170 [2024-11-06 15:44:24.612033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:57.170 [2024-11-06 15:44:24.613036] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:57.170 [2024-11-06 15:44:24.613311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:57.170 [2024-11-06 15:44:24.613540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:57.430 15:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:57.430 15:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:39:57.430 15:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:57.430 15:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:57.430 15:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:57.430 15:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:57.430 15:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
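The rpc.py calls that follow in the log (transport, subsystem cnode1, listeners on 10.0.0.2:4420, then the Malloc0 → Delay0 chain and the NULL1 bdev) can be condensed into one script. A sketch only: the relative `scripts/rpc.py` path and an already-running nvmf_tgt are assumptions, while the flags are taken verbatim from the log:

```shell
#!/usr/bin/env bash
# ns_hotplug_stress bring-up, condensed from the log's rpc.py invocations.
RPC=${RPC:-scripts/rpc.py}      # assumption: run from an SPDK checkout with nvmf_tgt up
rc=0
if [ -x "$RPC" ]; then
  "$RPC" nvmf_create_transport -t tcp -o -u 8192                                           || rc=1
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   || rc=1
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 || rc=1
  "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420                  || rc=1
  "$RPC" bdev_malloc_create 32 512 -b Malloc0                                              || rc=1
  "$RPC" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 || rc=1
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0                           || rc=1
  "$RPC" bdev_null_create NULL1 1000 512                                                   || rc=1
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1                            || rc=1
else
  echo "no rpc.py here; skipping"  # sketch still exits cleanly outside an SPDK tree
fi
```

Delay0 wraps Malloc0 with 1 ms latencies so namespace removal races against slow in-flight I/O; NULL1 starts at 1000 blocks of 512 B and is what the resize steps below grow.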
00:39:57.430 15:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:57.430 [2024-11-06 15:44:25.062510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:57.689 15:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:57.689 15:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:57.948 [2024-11-06 15:44:25.475009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:57.948 15:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:58.207 15:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:39:58.466 Malloc0 00:39:58.466 15:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:58.466 Delay0 00:39:58.466 15:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:58.724 15:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:39:58.983 NULL1 00:39:58.983 15:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:39:59.242 15:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4119836 00:39:59.242 15:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:39:59.242 15:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:59.242 15:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:39:59.242 15:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:59.500 15:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:39:59.500 15:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:39:59.758 true 00:39:59.758 15:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:39:59.758 15:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:00.016 15:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:00.274 15:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:40:00.274 15:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:40:00.274 true 00:40:00.274 15:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:00.274 15:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:00.533 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:00.791 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:40:00.791 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:40:01.049 true 00:40:01.049 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:01.049 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:01.307 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:01.565 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:40:01.565 15:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:40:01.565 true 00:40:01.565 15:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:01.566 15:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:01.824 15:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:02.082 15:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:40:02.082 15:44:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:40:02.340 true 00:40:02.340 15:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:02.340 15:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:02.598 15:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:02.856 15:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:40:02.856 15:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:40:02.856 true 00:40:02.856 15:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:02.856 15:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:03.115 15:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:03.374 15:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:40:03.375 15:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:40:03.634 true 00:40:03.634 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:03.634 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:03.893 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:04.151 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:40:04.151 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:40:04.151 true 00:40:04.151 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:04.151 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:04.410 15:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:04.668 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:40:04.668 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:40:04.926 true 00:40:04.926 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:04.926 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:05.185 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:05.444 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:40:05.444 15:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:40:05.444 true 00:40:05.444 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:05.444 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:05.702 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:05.961 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:40:05.961 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:40:06.221 true 00:40:06.221 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:06.221 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:06.480 15:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:06.739 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:40:06.739 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:40:06.739 true 00:40:06.739 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:06.739 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:06.998 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:07.256 15:44:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:40:07.256 15:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:40:07.515 true 00:40:07.515 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:07.515 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:07.774 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:08.033 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:40:08.033 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:40:08.033 true 00:40:08.033 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:08.033 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:08.292 15:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:40:08.551 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:40:08.551 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:40:08.810 true 00:40:08.810 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:08.810 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:09.069 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:09.328 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:40:09.328 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:40:09.328 true 00:40:09.328 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:09.328 15:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:09.587 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:40:09.846 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:40:09.846 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:40:10.104 true 00:40:10.104 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:10.104 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:10.363 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:10.363 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:40:10.363 15:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:40:10.622 true 00:40:10.622 15:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:10.622 15:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:10.881 15:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:11.141 15:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:40:11.141 15:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:40:11.399 true 00:40:11.399 15:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:11.399 15:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:11.659 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:11.659 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:40:11.659 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:40:11.917 true 00:40:11.917 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:11.917 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:12.175 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:12.434 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:40:12.434 15:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:40:12.693 true 00:40:12.693 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:12.693 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:12.952 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:12.952 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:40:12.952 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:40:13.211 true 00:40:13.211 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:13.211 15:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:13.470 15:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:13.729 15:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:40:13.729 15:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:40:13.988 true 00:40:13.988 15:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:13.988 15:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:14.247 15:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:14.247 15:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:40:14.247 15:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:40:14.506 true 00:40:14.506 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:14.506 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:14.764 15:44:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:15.022 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:40:15.022 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:40:15.281 true 00:40:15.281 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:15.281 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:15.540 15:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:15.540 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:40:15.540 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:40:15.798 true 00:40:15.798 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:15.798 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:40:16.057 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:16.315 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:40:16.315 15:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:40:16.574 true 00:40:16.574 15:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:16.575 15:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:16.833 15:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:16.833 15:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:40:16.833 15:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:40:17.092 true 00:40:17.092 15:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:17.092 15:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:40:17.350 15:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:17.609 15:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:40:17.609 15:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:40:17.867 true 00:40:17.868 15:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:17.868 15:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:18.126 15:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:18.384 15:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:40:18.384 15:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:40:18.384 true 00:40:18.384 15:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:18.384 15:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:18.643 15:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:18.902 15:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:40:18.902 15:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:40:19.160 true 00:40:19.160 15:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:19.161 15:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:19.419 15:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:19.678 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:40:19.678 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:40:19.678 true 00:40:19.678 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:19.678 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:19.937 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:20.196 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:40:20.196 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:40:20.454 true 00:40:20.454 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:20.454 15:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:20.713 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:20.971 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:40:20.971 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:40:20.971 true 00:40:20.971 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:20.971 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:21.230 15:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:21.489 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:40:21.489 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:40:21.747 true 00:40:21.747 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:21.747 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:22.006 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:22.265 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:40:22.265 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:40:22.265 true 00:40:22.265 15:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:22.265 15:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:22.523 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:22.782 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:40:22.782 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:40:23.040 true 00:40:23.040 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:23.040 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:23.300 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:23.559 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:40:23.559 15:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:40:23.559 true 00:40:23.559 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 
00:40:23.559 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:23.818 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:24.077 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:40:24.077 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:40:24.336 true 00:40:24.336 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:24.336 15:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:24.595 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:24.854 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:40:24.854 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:40:24.854 true 00:40:24.854 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 4119836 00:40:24.854 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:25.113 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:25.372 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:40:25.372 15:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:40:25.630 true 00:40:25.630 15:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:25.630 15:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:25.889 15:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:26.148 15:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:40:26.148 15:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:40:26.148 true 00:40:26.148 15:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:26.148 15:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:26.407 15:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:26.665 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:40:26.665 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:40:26.923 true 00:40:26.923 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:26.923 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:27.182 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:27.441 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:40:27.441 15:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:40:27.441 true 00:40:27.441 15:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:27.441 15:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:27.699 15:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:27.958 15:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:40:27.958 15:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:40:28.217 true 00:40:28.217 15:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836 00:40:28.217 15:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:28.479 15:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:28.737 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:40:28.738 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:40:29.003 true 
00:40:29.003 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836
00:40:29.003 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:40:29.292 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:40:29.292 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:40:29.292 15:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:40:29.587 Initializing NVMe Controllers
00:40:29.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:40:29.587 Controller IO queue size 128, less than required.
00:40:29.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:40:29.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:40:29.587 Initialization complete. Launching workers.
00:40:29.587 ========================================================
00:40:29.587 Latency(us)
00:40:29.587 Device Information : IOPS MiB/s Average min max
00:40:29.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 24117.23 11.78 5307.41 1405.67 10986.48
00:40:29.587 ========================================================
00:40:29.587 Total : 24117.23 11.78 5307.41 1405.67 10986.48
00:40:29.587
00:40:29.587 true
00:40:29.587 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4119836
00:40:29.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4119836) - No such process
00:40:29.587 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4119836
00:40:29.587 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:40:29.857 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:40:29.858 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:40:29.858 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:40:29.858 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:40:29.858 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:40:29.858 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:40:30.116 null0 00:40:30.116 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:30.117 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:30.117 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:40:30.375 null1 00:40:30.375 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:30.375 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:30.376 15:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:40:30.634 null2 00:40:30.634 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:30.634 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:30.635 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:40:30.635 null3 00:40:30.635 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:30.635 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:40:30.635 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:40:30.893 null4 00:40:30.893 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:30.893 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:30.893 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:40:31.152 null5 00:40:31.152 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:31.152 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:31.152 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:40:31.152 null6 00:40:31.412 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:31.412 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:31.412 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:40:31.412 null7 00:40:31.412 15:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:31.412 15:44:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:31.412 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:40:31.412 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:31.412 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:31.412 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:40:31.412 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:31.412 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:40:31.412 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:31.412 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:31.412 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4125003 4125004 4125006 4125008 4125010 4125012 4125014 4125016 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 
00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.413 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:31.673 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:31.673 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:31.673 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:31.673 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:31.673 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:31.673 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:40:31.673 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:31.673 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:31.932 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:32.191 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:32.191 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:32.191 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:32.191 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:32.191 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:32.191 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:32.191 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:32.191 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.449 15:44:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.449 15:44:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.449 15:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:32.449 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:32.449 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:32.449 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:32.449 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:32.449 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:32.449 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:32.449 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:32.449 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.708 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:32.968 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:32.968 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:32.968 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:32.968 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:40:32.968 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:32.968 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:32.968 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:32.968 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.227 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.228 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.228 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.228 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:33.228 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:33.228 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:33.228 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:33.486 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:33.486 15:45:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:33.486 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:33.486 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:33.486 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:33.486 15:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:33.486 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.486 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.486 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:33.486 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.486 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:40:33.486 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:33.486 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.486 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.486 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:33.486 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.486 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.486 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:33.487 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.487 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.487 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:33.487 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.487 15:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.487 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:33.487 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.487 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.487 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:33.487 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.487 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.487 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:33.745 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:33.746 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:33.746 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:33.746 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:33.746 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:33.746 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:33.746 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:33.746 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.005 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:34.263 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:34.263 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:34.263 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:34.263 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:34.263 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:34.263 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:34.263 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:34.263 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:34.263 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.264 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.264 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:34.264 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.264 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.264 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:34.264 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.264 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.264 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:34.264 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.264 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.264 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:34.522 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.522 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.522 15:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:34.522 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.522 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.522 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:34.522 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.522 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.522 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:34.522 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.522 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.522 15:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:34.522 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:34.522 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:34.522 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:34.522 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:34.522 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:34.522 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:34.522 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:34.522 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:34.782 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:35.040 15:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:35.040 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:35.040 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:35.040 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:35.040 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:35.040 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:35.041 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:35.041 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:35.300 15:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.300 15:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:35.300 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:35.558 15:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:35.558 15:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:40:35.558 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.559 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:35.559 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:35.559 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:40:35.559 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:40:35.559 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:35.559 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:40:35.559 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:35.559 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:40:35.559 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:35.559 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:35.559 rmmod nvme_tcp 00:40:35.816 rmmod nvme_fabrics 00:40:35.816 rmmod nvme_keyring 00:40:35.816 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:40:35.817 15:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4119350 ']' 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4119350 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 4119350 ']' 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 4119350 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4119350 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4119350' 00:40:35.817 killing process with pid 4119350 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 4119350 00:40:35.817 15:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 4119350 00:40:37.192 15:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:37.192 15:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p 
]] 00:40:37.192 15:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:37.192 15:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:40:37.192 15:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:40:37.192 15:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:37.192 15:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:40:37.192 15:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:37.192 15:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:37.192 15:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:37.192 15:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:37.192 15:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:39.097 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:39.097 00:40:39.097 real 0m48.619s 00:40:39.097 user 3m2.965s 00:40:39.097 sys 0m21.554s 00:40:39.097 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:39.097 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:39.097 ************************************ 00:40:39.097 END TEST nvmf_ns_hotplug_stress 00:40:39.097 
************************************ 00:40:39.097 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:40:39.097 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:39.097 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:39.097 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:39.097 ************************************ 00:40:39.097 START TEST nvmf_delete_subsystem 00:40:39.097 ************************************ 00:40:39.097 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:40:39.097 * Looking for test storage... 
00:40:39.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:39.097 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:39.097 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:40:39.097 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:39.356 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:39.356 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:39.356 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:39.356 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:39.356 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:40:39.357 15:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:40:39.357 15:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:39.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.357 --rc genhtml_branch_coverage=1 00:40:39.357 --rc genhtml_function_coverage=1 00:40:39.357 --rc genhtml_legend=1 00:40:39.357 --rc geninfo_all_blocks=1 00:40:39.357 --rc geninfo_unexecuted_blocks=1 00:40:39.357 00:40:39.357 ' 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:39.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.357 --rc genhtml_branch_coverage=1 00:40:39.357 --rc genhtml_function_coverage=1 00:40:39.357 --rc genhtml_legend=1 00:40:39.357 --rc geninfo_all_blocks=1 00:40:39.357 --rc geninfo_unexecuted_blocks=1 00:40:39.357 00:40:39.357 ' 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:39.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.357 --rc genhtml_branch_coverage=1 00:40:39.357 --rc genhtml_function_coverage=1 00:40:39.357 --rc genhtml_legend=1 00:40:39.357 --rc geninfo_all_blocks=1 00:40:39.357 --rc geninfo_unexecuted_blocks=1 00:40:39.357 00:40:39.357 ' 00:40:39.357 15:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:39.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:39.357 --rc genhtml_branch_coverage=1 00:40:39.357 --rc genhtml_function_coverage=1 00:40:39.357 --rc genhtml_legend=1 00:40:39.357 --rc geninfo_all_blocks=1 00:40:39.357 --rc geninfo_unexecuted_blocks=1 00:40:39.357 00:40:39.357 ' 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:39.357 15:45:06 
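The trace above shows `scripts/common.sh` comparing the installed lcov version (1.15) against 2 field by field: split both versions on `.` and `-`, then walk the fields numerically. A minimal standalone sketch of that comparison (hypothetical helper name; the real `cmp_versions` in scripts/common.sh supports more operators):

```shell
#!/usr/bin/env bash
# Sketch of the field-by-field dotted-version compare seen in the trace.
# Returns 0 (true) when $1 is strictly less than $2.
version_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local len=${#v1[@]} i a b
    (( ${#v2[@]} > len )) && len=${#v2[@]}
    for ((i = 0; i < len; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This mirrors the `ver1[v]=1`, `ver2[v]=2`, `(( ver1[v] < ver2[v] ))` steps in the trace, which is why the check returns 0 and the `--rc lcov_branch_coverage=1` options get enabled.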
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.357 
15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:39.357 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:39.358 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:40:39.358 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:39.358 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:39.358 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:39.358 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:39.358 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:39.358 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:39.358 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:39.358 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:39.358 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:39.358 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:39.358 15:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:40:39.358 15:45:06 
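The PATH echoed a few entries above accumulates many duplicate `/opt/go`, `/opt/protoc`, and `/opt/golangci` segments because `paths/export.sh` prepends each directory unconditionally on every source. An idempotent prepend avoids the growth (hypothetical helper, bash 4.3+ for the nameref; not part of the SPDK scripts):

```shell
#!/usr/bin/env bash
# Prepend a directory to a colon-separated list only if it is not
# already present, so repeated sourcing cannot duplicate entries.
prepend_unique() {            # prepend_unique VARNAME DIR
    local -n list=$1
    case ":$list:" in
        *":$2:"*) ;;          # already present: no-op
        *) list="$2:$list" ;;
    esac
}

demo_path=/usr/local/bin:/usr/bin
prepend_unique demo_path /opt/go/1.21.1/bin
prepend_unique demo_path /opt/go/1.21.1/bin   # duplicate call: no change
echo "$demo_path"   # /opt/go/1.21.1/bin:/usr/local/bin:/usr/bin
```

The duplicates in the trace are harmless (lookup stops at the first match) but make the `echo $PATH` output hard to read.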
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:40:45.930 Found 0000:86:00.0 (0x8086 - 0x159b) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:40:45.930 Found 0000:86:00.1 (0x8086 - 0x159b) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:45.930 15:45:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:40:45.930 Found net devices under 0000:86:00.0: cvl_0_0 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:40:45.930 Found net devices under 0000:86:00.1: cvl_0_1 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:45.930 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:45.930 15:45:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:40:45.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:45.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:40:45.931 00:40:45.931 --- 10.0.0.2 ping statistics --- 00:40:45.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:45.931 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:45.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:45.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:40:45.931 00:40:45.931 --- 10.0.0.1 ping statistics --- 00:40:45.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:45.931 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
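The `nvmf_tcp_init` sequence above moves one physical port (cvl_0_0) into the `cvl_0_0_ns_spdk` namespace as the target side (10.0.0.2), keeps cvl_0_1 in the root namespace as the initiator (10.0.0.1), opens TCP port 4420 in iptables, and verifies reachability in both directions with ping. The same split can be reproduced without SPDK hardware using a veth pair (interface and namespace names here are hypothetical; requires root):

```shell
# Target/initiator split analogous to the trace, on a veth pair.
ip netns add spdk_tgt_ns
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns spdk_tgt_ns          # target side into the netns
ip addr add 10.0.0.1/24 dev veth_init           # initiator address
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # initiator -> target check
```

Running the target inside its own namespace (the `ip netns exec cvl_0_0_ns_spdk` prefix added to `NVMF_APP` in the trace) keeps the target's network stack isolated from the initiator even though both run on one host.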
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4129929 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4129929 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 4129929 ']' 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:45.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:45.931 15:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.931 [2024-11-06 15:45:12.782560] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:45.931 [2024-11-06 15:45:12.784850] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:40:45.931 [2024-11-06 15:45:12.784922] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:45.931 [2024-11-06 15:45:12.919577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:45.931 [2024-11-06 15:45:13.026035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:45.931 [2024-11-06 15:45:13.026078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:45.931 [2024-11-06 15:45:13.026090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:45.931 [2024-11-06 15:45:13.026100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:45.931 [2024-11-06 15:45:13.026113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:45.931 [2024-11-06 15:45:13.028397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:45.931 [2024-11-06 15:45:13.028422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:45.931 [2024-11-06 15:45:13.329396] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:45.931 [2024-11-06 15:45:13.329442] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:45.931 [2024-11-06 15:45:13.329709] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:46.191 [2024-11-06 15:45:13.613437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:46.191 [2024-11-06 15:45:13.641793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:46.191 NULL1 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 
1000000 -t 1000000 -w 1000000 -n 1000000 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:46.191 Delay0 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4130132 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:40:46.191 15:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:46.191 [2024-11-06 15:45:13.791861] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
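The setup traced above builds the target in six RPC steps: create the TCP transport, create subsystem `cnode1`, add its TCP listener, create a null bdev, wrap it in a delay bdev (`-r/-t/-w/-n` are microseconds, so every I/O gains ~1 s of latency), and attach the delay bdev as a namespace. The harness issues these through its `rpc_cmd` wrapper; the dry-run recap below uses `scripts/rpc.py` directly, which is an assumption about running from an SPDK checkout (commands are echoed, not executed):

```shell
#!/usr/bin/env bash
# Dry-run recap of the RPC sequence from the trace above.
# Assumption: scripts/rpc.py relative to an SPDK repository root.
RPC="scripts/rpc.py"

setup_cmds=(
  "nvmf_create_transport -t tcp -o -u 8192"
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10"
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
  "bdev_null_create NULL1 1000 512"
  "bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000"
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0"
)

for cmd in "${setup_cmds[@]}"; do
  echo "$RPC $cmd"   # drop the echo to run against a live target's RPC socket
done
```

The delay bdev is the point of the test: with 1,000,000 us per I/O, `nvmf_delete_subsystem` is guaranteed to run while I/O is still in flight.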
00:40:48.095 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:48.095 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.095 15:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:40:48.354 [repeated: "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" for each in-flight I/O, interleaved with "starting I/O failed: -6"]
00:40:48.354 [2024-11-06 15:45:15.932926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f700 is same with the state(6) to be set
00:40:48.355 [2024-11-06 15:45:15.933718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set
00:40:48.355 [2024-11-06 15:45:15.934423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020880 is same with the state(6) to be set
00:40:48.355 [2024-11-06 15:45:15.935158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set
00:40:49.293 [2024-11-06 15:45:16.902674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e300 is same with the state(6) to be set
00:40:49.553 [repeated: "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" for each I/O still queued on the remaining qpairs]
00:40:49.553 [2024-11-06 15:45:16.936394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set
00:40:49.553 [2024-11-06 15:45:16.937542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f200 is same with the state(6) to be set
00:40:49.553 [2024-11-06 15:45:16.938473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fc00 is same with the state(6) to be set
00:40:49.553 [repeated: "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)"]
00:40:49.553 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.553 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:40:49.553 15:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4130132 00:40:49.553 15:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:40:49.553 [2024-11-06 15:45:16.943814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020d80 is same with the state(6) to be set 00:40:49.553 Initializing NVMe Controllers 00:40:49.553 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:49.553 Controller IO queue size 128, less than required. 00:40:49.553 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:49.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:49.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:49.553 Initialization complete. Launching workers. 00:40:49.553 ======================================================== 00:40:49.553 Latency(us) 00:40:49.553 Device Information : IOPS MiB/s Average min max 00:40:49.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.09 0.09 950036.85 617.44 1013413.05 00:40:49.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.82 0.08 867617.26 630.00 1013650.31 00:40:49.554 ======================================================== 00:40:49.554 Total : 346.91 0.17 912541.24 617.44 1013650.31 00:40:49.554 00:40:49.554 [2024-11-06 15:45:16.949090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500001e300 (9): Bad file descriptor 00:40:49.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:40:49.812 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:40:49.812 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4130132 00:40:49.812 
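The `kill -0 4130132` traced above, together with the `delay=0` counter and `sleep 0.5`, is the script's wait-for-exit idiom: signal 0 delivers nothing and only probes whether the PID still exists, and the counter bounds the wait. A minimal stand-alone sketch of that loop, with a plain `sleep` standing in for the backgrounded `spdk_nvme_perf` process:

```shell
#!/usr/bin/env bash
# Liveness-polling idiom from delete_subsystem.sh, with a stand-in process.
# `kill -0` sends no signal; it only checks that the PID is still alive.
sleep 2 &            # stand-in for the backgrounded spdk_nvme_perf run
pid=$!

delay=0
while kill -0 "$pid" 2>/dev/null; do
  if [ "$delay" -gt 30 ]; then   # give up after ~15 s (30 polls * 0.5 s)
    break
  fi
  delay=$((delay + 1))
  sleep 0.5
done
# Here: either the process has exited or the budget was exhausted.
```

Once the process is gone, a further `kill -0` prints "No such process", which is exactly the message the log shows when the perf process has already been reaped.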
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4130132) - No such process 00:40:49.812 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4130132 00:40:49.812 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:40:49.812 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 4130132 00:40:49.812 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 4130132 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -m 10 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:50.071 [2024-11-06 15:45:17.473844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:50.071 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.072 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:50.072 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.072 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:50.072 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.072 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4130817 00:40:50.072 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:40:50.072 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:50.072 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4130817 00:40:50.072 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:50.072 [2024-11-06 15:45:17.594927] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:40:50.640 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:50.640 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4130817 00:40:50.640 15:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:50.899 15:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:50.899 15:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4130817 00:40:50.899 15:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:51.468 15:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:51.468 15:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@57 -- # kill -0 4130817 00:40:51.468 15:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:52.035 15:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:52.035 15:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4130817 00:40:52.035 15:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:52.601 15:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:52.601 15:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4130817 00:40:52.601 15:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:53.169 15:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:53.169 15:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4130817 00:40:53.169 15:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:53.169 Initializing NVMe Controllers 00:40:53.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:53.169 Controller IO queue size 128, less than required. 00:40:53.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:53.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:53.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:53.169 Initialization complete. 
Launching workers. 00:40:53.169 ======================================================== 00:40:53.169 Latency(us) 00:40:53.169 Device Information : IOPS MiB/s Average min max 00:40:53.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003523.70 1000164.76 1010626.22 00:40:53.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005114.95 1000293.59 1012329.46 00:40:53.169 ======================================================== 00:40:53.169 Total : 256.00 0.12 1004319.33 1000164.76 1012329.46 00:40:53.169 00:40:53.428 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:53.428 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4130817 00:40:53.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4130817) - No such process 00:40:53.428 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4130817 00:40:53.428 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:40:53.428 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:40:53.428 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:53.428 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:40:53.428 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:53.428 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:40:53.428 15:45:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:53.428 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:53.428 rmmod nvme_tcp 00:40:53.428 rmmod nvme_fabrics 00:40:53.428 rmmod nvme_keyring 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4129929 ']' 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4129929 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 4129929 ']' 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 4129929 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4129929 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:53.687 
15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4129929' 00:40:53.687 killing process with pid 4129929 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 4129929 00:40:53.687 15:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 4129929 00:40:54.624 15:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:54.624 15:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:54.624 15:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:54.624 15:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:40:54.624 15:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:40:54.624 15:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:54.624 15:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:40:54.625 15:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:54.625 15:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:54.625 15:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:54.625 15:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:54.625 15:45:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:57.160 00:40:57.160 real 0m17.683s 00:40:57.160 user 0m27.477s 00:40:57.160 sys 0m6.354s 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:57.160 ************************************ 00:40:57.160 END TEST nvmf_delete_subsystem 00:40:57.160 ************************************ 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:57.160 ************************************ 00:40:57.160 START TEST nvmf_host_management 00:40:57.160 ************************************ 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:57.160 * Looking for test storage... 
00:40:57.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:40:57.160 15:45:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:40:57.160 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:57.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.161 --rc genhtml_branch_coverage=1 00:40:57.161 --rc genhtml_function_coverage=1 00:40:57.161 --rc genhtml_legend=1 00:40:57.161 --rc geninfo_all_blocks=1 00:40:57.161 --rc geninfo_unexecuted_blocks=1 00:40:57.161 00:40:57.161 ' 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:57.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.161 --rc genhtml_branch_coverage=1 00:40:57.161 --rc genhtml_function_coverage=1 00:40:57.161 --rc genhtml_legend=1 00:40:57.161 --rc geninfo_all_blocks=1 00:40:57.161 --rc geninfo_unexecuted_blocks=1 00:40:57.161 00:40:57.161 ' 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:57.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.161 --rc genhtml_branch_coverage=1 00:40:57.161 --rc genhtml_function_coverage=1 00:40:57.161 --rc genhtml_legend=1 00:40:57.161 --rc geninfo_all_blocks=1 00:40:57.161 --rc geninfo_unexecuted_blocks=1 00:40:57.161 00:40:57.161 ' 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:57.161 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.161 --rc genhtml_branch_coverage=1 00:40:57.161 --rc genhtml_function_coverage=1 00:40:57.161 --rc genhtml_legend=1 00:40:57.161 --rc geninfo_all_blocks=1 00:40:57.161 --rc geninfo_unexecuted_blocks=1 00:40:57.161 00:40:57.161 ' 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:57.161 15:45:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.161 
15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:40:57.161 15:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:41:03.731 
15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:03.731 15:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:41:03.731 Found 0000:86:00.0 (0x8086 - 0x159b) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:03.731 15:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:41:03.731 Found 0000:86:00.1 (0x8086 - 0x159b) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:03.731 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:03.732 15:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:41:03.732 Found net devices under 0000:86:00.0: cvl_0_0 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:41:03.732 Found net devices under 0000:86:00.1: cvl_0_1 00:41:03.732 15:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:03.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:03.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:41:03.732 00:41:03.732 --- 10.0.0.2 ping statistics --- 00:41:03.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:03.732 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:03.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:03.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:41:03.732 00:41:03.732 --- 10.0.0.1 ping statistics --- 00:41:03.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:03.732 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4134874 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4134874 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 4134874 ']' 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:03.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:03.732 15:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:03.732 [2024-11-06 15:45:30.534159] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:03.732 [2024-11-06 15:45:30.536269] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:41:03.732 [2024-11-06 15:45:30.536356] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:03.732 [2024-11-06 15:45:30.668233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:03.732 [2024-11-06 15:45:30.780889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:03.732 [2024-11-06 15:45:30.780934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:03.732 [2024-11-06 15:45:30.780947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:03.732 [2024-11-06 15:45:30.780957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:03.733 [2024-11-06 15:45:30.780967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:03.733 [2024-11-06 15:45:30.783439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:03.733 [2024-11-06 15:45:30.783536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:03.733 [2024-11-06 15:45:30.783540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:03.733 [2024-11-06 15:45:30.783562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:03.733 [2024-11-06 15:45:31.071044] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:03.733 [2024-11-06 15:45:31.077906] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:03.733 [2024-11-06 15:45:31.078153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:03.733 [2024-11-06 15:45:31.080622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:03.733 [2024-11-06 15:45:31.081305] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:41:03.733 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:03.733 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:41:03.733 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:03.733 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:03.733 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:03.992 [2024-11-06 15:45:31.388829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:03.992 15:45:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:03.992 Malloc0 00:41:03.992 [2024-11-06 15:45:31.532751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4135088 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4135088 /var/tmp/bdevperf.sock 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 4135088 ']' 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:03.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:03.992 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:03.992 { 00:41:03.992 "params": { 00:41:03.992 "name": "Nvme$subsystem", 00:41:03.993 "trtype": "$TEST_TRANSPORT", 00:41:03.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:03.993 "adrfam": "ipv4", 00:41:03.993 "trsvcid": "$NVMF_PORT", 00:41:03.993 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:41:03.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:03.993 "hdgst": ${hdgst:-false}, 00:41:03.993 "ddgst": ${ddgst:-false} 00:41:03.993 }, 00:41:03.993 "method": "bdev_nvme_attach_controller" 00:41:03.993 } 00:41:03.993 EOF 00:41:03.993 )") 00:41:03.993 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:41:03.993 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:41:03.993 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:41:03.993 15:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:03.993 "params": { 00:41:03.993 "name": "Nvme0", 00:41:03.993 "trtype": "tcp", 00:41:03.993 "traddr": "10.0.0.2", 00:41:03.993 "adrfam": "ipv4", 00:41:03.993 "trsvcid": "4420", 00:41:03.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:03.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:03.993 "hdgst": false, 00:41:03.993 "ddgst": false 00:41:03.993 }, 00:41:03.993 "method": "bdev_nvme_attach_controller" 00:41:03.993 }' 00:41:04.252 [2024-11-06 15:45:31.653985] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:41:04.252 [2024-11-06 15:45:31.654072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4135088 ] 00:41:04.252 [2024-11-06 15:45:31.779659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:04.510 [2024-11-06 15:45:31.890268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:04.769 Running I/O for 10 seconds... 
00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:41:05.031 15:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=131 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']' 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:05.031 
[2024-11-06 15:45:32.528461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the 
state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 [2024-11-06 15:45:32.528816] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:41:05.031 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.032 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:41:05.032 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.032 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:05.032 [2024-11-06 15:45:32.536426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:05.032 [2024-11-06 15:45:32.536467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.536482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:05.032 [2024-11-06 15:45:32.536494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:41:05.032 [2024-11-06 15:45:32.536504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:05.032 [2024-11-06 15:45:32.536514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.536525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:05.032 [2024-11-06 15:45:32.536534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.536543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:41:05.032 [2024-11-06 15:45:32.538967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.538999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539434] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539550] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.032 [2024-11-06 15:45:32.539633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.032 [2024-11-06 15:45:32.539644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 
15:45:32.539787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.539981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.539992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 
15:45:32.540269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.540349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.033 [2024-11-06 15:45:32.540358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.033 [2024-11-06 15:45:32.541677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:41:05.033 task offset: 29184 on job bdev=Nvme0n1 fails 00:41:05.033 00:41:05.033 Latency(us) 00:41:05.033 [2024-11-06T14:45:32.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:05.033 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 
65536) 00:41:05.033 Job: Nvme0n1 ended in about 0.15 seconds with error 00:41:05.033 Verification LBA range: start 0x0 length 0x400 00:41:05.033 Nvme0n1 : 0.15 1485.23 92.83 416.91 0.00 31601.00 2075.31 31082.79 00:41:05.033 [2024-11-06T14:45:32.671Z] =================================================================================================================== 00:41:05.033 [2024-11-06T14:45:32.671Z] Total : 1485.23 92.83 416.91 0.00 31601.00 2075.31 31082.79 00:41:05.033 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.033 15:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:41:05.033 [2024-11-06 15:45:32.557357] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:05.033 [2024-11-06 15:45:32.557398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032dc80 (9): Bad file descriptor 00:41:05.033 [2024-11-06 15:45:32.562028] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4135088 00:41:05.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4135088) - No such process 00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:05.970 { 00:41:05.970 "params": { 00:41:05.970 "name": "Nvme$subsystem", 00:41:05.970 "trtype": "$TEST_TRANSPORT", 00:41:05.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:05.970 "adrfam": "ipv4", 00:41:05.970 "trsvcid": "$NVMF_PORT", 00:41:05.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:05.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:05.970 "hdgst": ${hdgst:-false}, 00:41:05.970 "ddgst": ${ddgst:-false} 
00:41:05.970 }, 00:41:05.970 "method": "bdev_nvme_attach_controller" 00:41:05.970 } 00:41:05.970 EOF 00:41:05.970 )") 00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:41:05.970 15:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:05.970 "params": { 00:41:05.970 "name": "Nvme0", 00:41:05.970 "trtype": "tcp", 00:41:05.970 "traddr": "10.0.0.2", 00:41:05.970 "adrfam": "ipv4", 00:41:05.970 "trsvcid": "4420", 00:41:05.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:05.970 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:05.970 "hdgst": false, 00:41:05.970 "ddgst": false 00:41:05.970 }, 00:41:05.970 "method": "bdev_nvme_attach_controller" 00:41:05.970 }' 00:41:06.229 [2024-11-06 15:45:33.628792] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:41:06.229 [2024-11-06 15:45:33.628880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4135492 ] 00:41:06.229 [2024-11-06 15:45:33.755306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.229 [2024-11-06 15:45:33.859623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:06.797 Running I/O for 1 seconds... 
00:41:08.174 1856.00 IOPS, 116.00 MiB/s 00:41:08.174 Latency(us) 00:41:08.174 [2024-11-06T14:45:35.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:08.174 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:41:08.174 Verification LBA range: start 0x0 length 0x400 00:41:08.174 Nvme0n1 : 1.03 1868.49 116.78 0.00 0.00 33694.96 7084.13 30333.81 00:41:08.174 [2024-11-06T14:45:35.812Z] =================================================================================================================== 00:41:08.174 [2024-11-06T14:45:35.812Z] Total : 1868.49 116.78 0.00 0.00 33694.96 7084.13 30333.81 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:41:08.742 
15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:08.742 rmmod nvme_tcp 00:41:08.742 rmmod nvme_fabrics 00:41:08.742 rmmod nvme_keyring 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4134874 ']' 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4134874 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 4134874 ']' 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 4134874 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:08.742 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4134874 00:41:09.001 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:41:09.001 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:41:09.001 15:45:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4134874' 00:41:09.001 killing process with pid 4134874 00:41:09.001 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 4134874 00:41:09.001 15:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 4134874 00:41:10.380 [2024-11-06 15:45:37.608825] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:41:10.380 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:10.380 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:10.380 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:10.380 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:41:10.380 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:41:10.380 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:10.380 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:41:10.380 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:10.380 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:10.380 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:10.380 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:10.380 15:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:12.285 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:12.286 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:41:12.286 00:41:12.286 real 0m15.378s 00:41:12.286 user 0m26.502s 00:41:12.286 sys 0m6.866s 00:41:12.286 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:12.286 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:12.286 ************************************ 00:41:12.286 END TEST nvmf_host_management 00:41:12.286 ************************************ 00:41:12.286 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:41:12.286 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:41:12.286 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:12.286 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:12.286 ************************************ 00:41:12.286 START TEST nvmf_lvol 00:41:12.286 ************************************ 00:41:12.286 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:41:12.286 * Looking for test storage... 
00:41:12.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:12.286 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:12.286 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:41:12.286 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:12.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.546 --rc genhtml_branch_coverage=1 00:41:12.546 --rc genhtml_function_coverage=1 00:41:12.546 --rc genhtml_legend=1 00:41:12.546 --rc geninfo_all_blocks=1 00:41:12.546 --rc geninfo_unexecuted_blocks=1 00:41:12.546 00:41:12.546 ' 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:12.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.546 --rc genhtml_branch_coverage=1 00:41:12.546 --rc genhtml_function_coverage=1 00:41:12.546 --rc genhtml_legend=1 00:41:12.546 --rc geninfo_all_blocks=1 00:41:12.546 --rc geninfo_unexecuted_blocks=1 00:41:12.546 00:41:12.546 ' 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:12.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.546 --rc genhtml_branch_coverage=1 00:41:12.546 --rc genhtml_function_coverage=1 00:41:12.546 --rc genhtml_legend=1 00:41:12.546 --rc geninfo_all_blocks=1 00:41:12.546 --rc geninfo_unexecuted_blocks=1 00:41:12.546 00:41:12.546 ' 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:12.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.546 --rc genhtml_branch_coverage=1 00:41:12.546 --rc genhtml_function_coverage=1 00:41:12.546 --rc genhtml_legend=1 00:41:12.546 --rc geninfo_all_blocks=1 00:41:12.546 --rc geninfo_unexecuted_blocks=1 00:41:12.546 00:41:12.546 ' 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:12.546 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:12.547 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:12.547 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:12.547 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:12.547 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:12.547 15:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:12.547 
15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:41:12.547 15:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:41:19.119 15:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:19.119 15:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:41:19.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:41:19.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:19.119 15:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:41:19.119 Found net devices under 0000:86:00.0: cvl_0_0 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:19.119 15:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:41:19.119 Found net devices under 0000:86:00.1: cvl_0_1 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:19.119 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:19.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:19.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:41:19.120 00:41:19.120 --- 10.0.0.2 ping statistics --- 00:41:19.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:19.120 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:19.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:19.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:41:19.120 00:41:19.120 --- 10.0.0.1 ping statistics --- 00:41:19.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:19.120 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4139547 
00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4139547 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 4139547 ']' 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:19.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:19.120 15:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:19.120 [2024-11-06 15:45:45.993976] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:19.120 [2024-11-06 15:45:45.996077] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:41:19.120 [2024-11-06 15:45:45.996144] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:19.120 [2024-11-06 15:45:46.127307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:19.120 [2024-11-06 15:45:46.233297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:19.120 [2024-11-06 15:45:46.233342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:19.120 [2024-11-06 15:45:46.233354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:19.120 [2024-11-06 15:45:46.233363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:19.120 [2024-11-06 15:45:46.233371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:19.120 [2024-11-06 15:45:46.235756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:19.120 [2024-11-06 15:45:46.235814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:19.120 [2024-11-06 15:45:46.235836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:19.120 [2024-11-06 15:45:46.548028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:19.120 [2024-11-06 15:45:46.549063] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:19.120 [2024-11-06 15:45:46.549370] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:41:19.120 [2024-11-06 15:45:46.549597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:19.379 15:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:19.379 15:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:41:19.379 15:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:19.379 15:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:19.379 15:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:19.379 15:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:19.379 15:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:19.638 [2024-11-06 15:45:47.024907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:19.638 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:19.897 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:41:19.897 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:20.155 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:41:20.155 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:41:20.414 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:41:20.414 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ed675d0f-352e-4e0c-86d0-d6a9448882bd 00:41:20.414 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ed675d0f-352e-4e0c-86d0-d6a9448882bd lvol 20 00:41:20.672 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=96cbdbca-7baa-42db-b0f1-7316f2fc97e2 00:41:20.672 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:20.931 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 96cbdbca-7baa-42db-b0f1-7316f2fc97e2 00:41:20.931 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:21.190 [2024-11-06 15:45:48.724710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:21.190 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:21.450 
15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4140037 00:41:21.450 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:41:21.450 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:41:22.387 15:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 96cbdbca-7baa-42db-b0f1-7316f2fc97e2 MY_SNAPSHOT 00:41:22.723 15:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fc0d8133-d07d-425a-a8e4-c5959a065d4d 00:41:22.723 15:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 96cbdbca-7baa-42db-b0f1-7316f2fc97e2 30 00:41:23.017 15:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone fc0d8133-d07d-425a-a8e4-c5959a065d4d MY_CLONE 00:41:23.330 15:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d1b598cc-3328-4733-b933-b5dae609eccb 00:41:23.330 15:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d1b598cc-3328-4733-b933-b5dae609eccb 00:41:23.898 15:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4140037 00:41:32.016 Initializing NVMe Controllers 00:41:32.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:41:32.016 
Controller IO queue size 128, less than required. 00:41:32.016 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:32.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:41:32.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:41:32.016 Initialization complete. Launching workers. 00:41:32.016 ======================================================== 00:41:32.016 Latency(us) 00:41:32.016 Device Information : IOPS MiB/s Average min max 00:41:32.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11409.70 44.57 11218.99 263.63 179666.24 00:41:32.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11162.80 43.60 11465.34 2503.20 146455.58 00:41:32.016 ======================================================== 00:41:32.016 Total : 22572.50 88.17 11340.82 263.63 179666.24 00:41:32.016 00:41:32.016 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:32.274 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 96cbdbca-7baa-42db-b0f1-7316f2fc97e2 00:41:32.274 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ed675d0f-352e-4e0c-86d0-d6a9448882bd 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 
-- # nvmftestfini 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:32.532 rmmod nvme_tcp 00:41:32.532 rmmod nvme_fabrics 00:41:32.532 rmmod nvme_keyring 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4139547 ']' 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4139547 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 4139547 ']' 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 4139547 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:41:32.532 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:32.791 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 4139547 00:41:32.791 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:32.791 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:32.791 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4139547' 00:41:32.791 killing process with pid 4139547 00:41:32.791 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 4139547 00:41:32.791 15:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 4139547 00:41:34.167 15:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:34.167 15:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:34.167 15:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:34.167 15:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:41:34.167 15:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:41:34.167 15:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:34.167 15:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:41:34.167 15:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:34.167 15:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:34.167 15:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.167 15:46:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.167 15:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.703 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:36.703 00:41:36.703 real 0m23.928s 00:41:36.703 user 0m58.498s 00:41:36.703 sys 0m9.655s 00:41:36.703 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:36.703 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:36.704 ************************************ 00:41:36.704 END TEST nvmf_lvol 00:41:36.704 ************************************ 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:36.704 ************************************ 00:41:36.704 START TEST nvmf_lvs_grow 00:41:36.704 ************************************ 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:41:36.704 * Looking for test storage... 
00:41:36.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:36.704 15:46:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:36.704 15:46:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:36.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.704 --rc genhtml_branch_coverage=1 00:41:36.704 --rc genhtml_function_coverage=1 00:41:36.704 --rc genhtml_legend=1 00:41:36.704 --rc geninfo_all_blocks=1 00:41:36.704 --rc geninfo_unexecuted_blocks=1 00:41:36.704 00:41:36.704 ' 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:36.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.704 --rc genhtml_branch_coverage=1 00:41:36.704 --rc genhtml_function_coverage=1 00:41:36.704 --rc genhtml_legend=1 00:41:36.704 --rc geninfo_all_blocks=1 00:41:36.704 --rc geninfo_unexecuted_blocks=1 00:41:36.704 00:41:36.704 ' 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:36.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.704 --rc genhtml_branch_coverage=1 00:41:36.704 --rc genhtml_function_coverage=1 00:41:36.704 --rc genhtml_legend=1 00:41:36.704 --rc geninfo_all_blocks=1 00:41:36.704 --rc geninfo_unexecuted_blocks=1 00:41:36.704 00:41:36.704 ' 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:36.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.704 --rc genhtml_branch_coverage=1 00:41:36.704 --rc genhtml_function_coverage=1 00:41:36.704 --rc genhtml_legend=1 00:41:36.704 --rc geninfo_all_blocks=1 00:41:36.704 --rc 
geninfo_unexecuted_blocks=1 00:41:36.704 00:41:36.704 ' 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:36.704 15:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:41:36.704 15:46:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.704 15:46:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.704 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:36.705 15:46:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:41:36.705 15:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:43.273 
15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:43.273 15:46:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:43.273 15:46:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:43.273 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:41:43.273 Found 0000:86:00.0 (0x8086 - 0x159b) 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:41:43.274 Found 0000:86:00.1 (0x8086 - 0x159b) 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:41:43.274 Found net devices under 0000:86:00.0: cvl_0_0 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:43.274 15:46:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:41:43.274 Found net devices under 0000:86:00.1: cvl_0_1 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:43.274 
15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:43.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:43.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:41:43.274 00:41:43.274 --- 10.0.0.2 ping statistics --- 00:41:43.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:43.274 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:43.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:43.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:41:43.274 00:41:43.274 --- 10.0.0.1 ping statistics --- 00:41:43.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:43.274 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:43.274 15:46:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=4145406 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 4145406 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 4145406 ']' 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:43.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:43.274 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:43.274 [2024-11-06 15:46:10.010841] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:43.274 [2024-11-06 15:46:10.013163] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:41:43.274 [2024-11-06 15:46:10.013265] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:43.274 [2024-11-06 15:46:10.146952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:43.274 [2024-11-06 15:46:10.257079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:43.275 [2024-11-06 15:46:10.257118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:43.275 [2024-11-06 15:46:10.257130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:43.275 [2024-11-06 15:46:10.257156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:43.275 [2024-11-06 15:46:10.257166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:43.275 [2024-11-06 15:46:10.258501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:43.275 [2024-11-06 15:46:10.579933] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:43.275 [2024-11-06 15:46:10.580219] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:43.275 15:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:43.275 15:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:41:43.275 15:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:43.275 15:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:43.275 15:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:43.275 15:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:43.275 15:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:43.533 [2024-11-06 15:46:11.023524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:43.533 ************************************ 00:41:43.533 START TEST lvs_grow_clean 00:41:43.533 ************************************ 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:41:43.533 15:46:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:43.533 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:43.791 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:43.791 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:44.049 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=77e0e2c4-2000-4899-a105-029b6010f2d5 00:41:44.049 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77e0e2c4-2000-4899-a105-029b6010f2d5 00:41:44.049 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:44.308 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:44.308 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:44.308 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 77e0e2c4-2000-4899-a105-029b6010f2d5 lvol 150 00:41:44.308 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e6a3080c-6949-4870-9ddb-8a9431f4010b 00:41:44.308 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:44.308 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:44.567 [2024-11-06 15:46:12.063336] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:44.567 [2024-11-06 15:46:12.063520] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:44.567 true 00:41:44.567 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77e0e2c4-2000-4899-a105-029b6010f2d5 00:41:44.567 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:44.826 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:44.826 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:44.826 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e6a3080c-6949-4870-9ddb-8a9431f4010b 00:41:45.084 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:45.343 [2024-11-06 15:46:12.803790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:45.343 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:45.601 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4145908 00:41:45.601 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:45.601 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:45.601 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4145908 /var/tmp/bdevperf.sock 00:41:45.601 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 4145908 ']' 00:41:45.601 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:45.601 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:45.601 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:45.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:41:45.601 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:45.601 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:45.601 [2024-11-06 15:46:13.064179] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:41:45.601 [2024-11-06 15:46:13.064287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4145908 ] 00:41:45.601 [2024-11-06 15:46:13.190684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.859 [2024-11-06 15:46:13.303479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:46.427 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:46.427 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:41:46.427 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:46.686 Nvme0n1 00:41:46.686 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:46.686 [ 00:41:46.686 { 00:41:46.686 "name": "Nvme0n1", 00:41:46.686 "aliases": [ 00:41:46.686 "e6a3080c-6949-4870-9ddb-8a9431f4010b" 00:41:46.686 ], 00:41:46.686 "product_name": "NVMe disk", 00:41:46.686 
"block_size": 4096, 00:41:46.686 "num_blocks": 38912, 00:41:46.686 "uuid": "e6a3080c-6949-4870-9ddb-8a9431f4010b", 00:41:46.686 "numa_id": 1, 00:41:46.686 "assigned_rate_limits": { 00:41:46.686 "rw_ios_per_sec": 0, 00:41:46.686 "rw_mbytes_per_sec": 0, 00:41:46.686 "r_mbytes_per_sec": 0, 00:41:46.686 "w_mbytes_per_sec": 0 00:41:46.686 }, 00:41:46.686 "claimed": false, 00:41:46.686 "zoned": false, 00:41:46.686 "supported_io_types": { 00:41:46.686 "read": true, 00:41:46.686 "write": true, 00:41:46.686 "unmap": true, 00:41:46.686 "flush": true, 00:41:46.686 "reset": true, 00:41:46.686 "nvme_admin": true, 00:41:46.686 "nvme_io": true, 00:41:46.686 "nvme_io_md": false, 00:41:46.686 "write_zeroes": true, 00:41:46.686 "zcopy": false, 00:41:46.686 "get_zone_info": false, 00:41:46.686 "zone_management": false, 00:41:46.686 "zone_append": false, 00:41:46.686 "compare": true, 00:41:46.686 "compare_and_write": true, 00:41:46.686 "abort": true, 00:41:46.686 "seek_hole": false, 00:41:46.686 "seek_data": false, 00:41:46.686 "copy": true, 00:41:46.686 "nvme_iov_md": false 00:41:46.686 }, 00:41:46.686 "memory_domains": [ 00:41:46.686 { 00:41:46.686 "dma_device_id": "system", 00:41:46.686 "dma_device_type": 1 00:41:46.686 } 00:41:46.686 ], 00:41:46.686 "driver_specific": { 00:41:46.686 "nvme": [ 00:41:46.686 { 00:41:46.686 "trid": { 00:41:46.686 "trtype": "TCP", 00:41:46.686 "adrfam": "IPv4", 00:41:46.686 "traddr": "10.0.0.2", 00:41:46.686 "trsvcid": "4420", 00:41:46.686 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:46.686 }, 00:41:46.686 "ctrlr_data": { 00:41:46.686 "cntlid": 1, 00:41:46.686 "vendor_id": "0x8086", 00:41:46.686 "model_number": "SPDK bdev Controller", 00:41:46.686 "serial_number": "SPDK0", 00:41:46.686 "firmware_revision": "25.01", 00:41:46.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:46.686 "oacs": { 00:41:46.686 "security": 0, 00:41:46.686 "format": 0, 00:41:46.686 "firmware": 0, 00:41:46.686 "ns_manage": 0 00:41:46.686 }, 00:41:46.686 "multi_ctrlr": true, 
00:41:46.686 "ana_reporting": false 00:41:46.686 }, 00:41:46.686 "vs": { 00:41:46.686 "nvme_version": "1.3" 00:41:46.686 }, 00:41:46.686 "ns_data": { 00:41:46.686 "id": 1, 00:41:46.686 "can_share": true 00:41:46.686 } 00:41:46.686 } 00:41:46.686 ], 00:41:46.686 "mp_policy": "active_passive" 00:41:46.686 } 00:41:46.686 } 00:41:46.686 ] 00:41:46.686 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4146142 00:41:46.686 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:46.686 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:46.945 Running I/O for 10 seconds... 00:41:47.881 Latency(us) 00:41:47.881 [2024-11-06T14:46:15.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:47.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:47.881 Nvme0n1 : 1.00 19685.00 76.89 0.00 0.00 0.00 0.00 0.00 00:41:47.881 [2024-11-06T14:46:15.519Z] =================================================================================================================== 00:41:47.881 [2024-11-06T14:46:15.519Z] Total : 19685.00 76.89 0.00 0.00 0.00 0.00 0.00 00:41:47.882 00:41:48.818 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 77e0e2c4-2000-4899-a105-029b6010f2d5 00:41:48.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:48.818 Nvme0n1 : 2.00 19939.00 77.89 0.00 0.00 0.00 0.00 0.00 00:41:48.818 [2024-11-06T14:46:16.456Z] 
=================================================================================================================== 00:41:48.818 [2024-11-06T14:46:16.457Z] Total : 19939.00 77.89 0.00 0.00 0.00 0.00 0.00 00:41:48.819 00:41:49.077 true 00:41:49.077 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77e0e2c4-2000-4899-a105-029b6010f2d5 00:41:49.077 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:49.336 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:49.336 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:49.336 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4146142 00:41:49.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:49.904 Nvme0n1 : 3.00 20023.67 78.22 0.00 0.00 0.00 0.00 0.00 00:41:49.904 [2024-11-06T14:46:17.542Z] =================================================================================================================== 00:41:49.904 [2024-11-06T14:46:17.542Z] Total : 20023.67 78.22 0.00 0.00 0.00 0.00 0.00 00:41:49.904 00:41:50.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:50.838 Nvme0n1 : 4.00 20129.50 78.63 0.00 0.00 0.00 0.00 0.00 00:41:50.838 [2024-11-06T14:46:18.476Z] =================================================================================================================== 00:41:50.838 [2024-11-06T14:46:18.476Z] Total : 20129.50 78.63 0.00 0.00 0.00 0.00 0.00 00:41:50.838 00:41:52.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:41:52.218 Nvme0n1 : 5.00 20180.40 78.83 0.00 0.00 0.00 0.00 0.00 00:41:52.218 [2024-11-06T14:46:19.856Z] =================================================================================================================== 00:41:52.218 [2024-11-06T14:46:19.856Z] Total : 20180.40 78.83 0.00 0.00 0.00 0.00 0.00 00:41:52.218 00:41:52.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:52.785 Nvme0n1 : 6.00 20180.50 78.83 0.00 0.00 0.00 0.00 0.00 00:41:52.785 [2024-11-06T14:46:20.423Z] =================================================================================================================== 00:41:52.785 [2024-11-06T14:46:20.423Z] Total : 20180.50 78.83 0.00 0.00 0.00 0.00 0.00 00:41:52.785 00:41:54.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:54.160 Nvme0n1 : 7.00 20196.14 78.89 0.00 0.00 0.00 0.00 0.00 00:41:54.160 [2024-11-06T14:46:21.798Z] =================================================================================================================== 00:41:54.160 [2024-11-06T14:46:21.798Z] Total : 20196.14 78.89 0.00 0.00 0.00 0.00 0.00 00:41:54.160 00:41:55.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:55.183 Nvme0n1 : 8.00 20227.50 79.01 0.00 0.00 0.00 0.00 0.00 00:41:55.183 [2024-11-06T14:46:22.821Z] =================================================================================================================== 00:41:55.183 [2024-11-06T14:46:22.821Z] Total : 20227.50 79.01 0.00 0.00 0.00 0.00 0.00 00:41:55.183 00:41:56.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:56.118 Nvme0n1 : 9.00 20266.00 79.16 0.00 0.00 0.00 0.00 0.00 00:41:56.118 [2024-11-06T14:46:23.756Z] =================================================================================================================== 00:41:56.118 [2024-11-06T14:46:23.756Z] Total : 20266.00 79.16 0.00 0.00 0.00 0.00 0.00 00:41:56.118 
00:41:57.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:57.054 Nvme0n1 : 10.00 20284.10 79.23 0.00 0.00 0.00 0.00 0.00 00:41:57.054 [2024-11-06T14:46:24.692Z] =================================================================================================================== 00:41:57.054 [2024-11-06T14:46:24.692Z] Total : 20284.10 79.23 0.00 0.00 0.00 0.00 0.00 00:41:57.054 00:41:57.054 00:41:57.054 Latency(us) 00:41:57.054 [2024-11-06T14:46:24.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:57.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:57.054 Nvme0n1 : 10.00 20289.10 79.25 0.00 0.00 6305.73 3557.67 29335.16 00:41:57.054 [2024-11-06T14:46:24.692Z] =================================================================================================================== 00:41:57.054 [2024-11-06T14:46:24.692Z] Total : 20289.10 79.25 0.00 0.00 6305.73 3557.67 29335.16 00:41:57.054 { 00:41:57.054 "results": [ 00:41:57.054 { 00:41:57.054 "job": "Nvme0n1", 00:41:57.054 "core_mask": "0x2", 00:41:57.054 "workload": "randwrite", 00:41:57.054 "status": "finished", 00:41:57.054 "queue_depth": 128, 00:41:57.054 "io_size": 4096, 00:41:57.054 "runtime": 10.003843, 00:41:57.054 "iops": 20289.10289775639, 00:41:57.054 "mibps": 79.2543081943609, 00:41:57.054 "io_failed": 0, 00:41:57.054 "io_timeout": 0, 00:41:57.054 "avg_latency_us": 6305.7321453499, 00:41:57.054 "min_latency_us": 3557.6685714285713, 00:41:57.054 "max_latency_us": 29335.161904761906 00:41:57.054 } 00:41:57.054 ], 00:41:57.054 "core_count": 1 00:41:57.054 } 00:41:57.055 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4145908 00:41:57.055 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 4145908 ']' 00:41:57.055 15:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 4145908 00:41:57.055 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:41:57.055 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:57.055 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4145908 00:41:57.055 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:41:57.055 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:41:57.055 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4145908' 00:41:57.055 killing process with pid 4145908 00:41:57.055 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 4145908 00:41:57.055 Received shutdown signal, test time was about 10.000000 seconds 00:41:57.055 00:41:57.055 Latency(us) 00:41:57.055 [2024-11-06T14:46:24.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:57.055 [2024-11-06T14:46:24.693Z] =================================================================================================================== 00:41:57.055 [2024-11-06T14:46:24.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:57.055 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 4145908 00:41:57.991 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:57.991 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:58.250 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:58.250 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77e0e2c4-2000-4899-a105-029b6010f2d5 00:41:58.508 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:58.508 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:41:58.508 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:58.508 [2024-11-06 15:46:26.115330] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77e0e2c4-2000-4899-a105-029b6010f2d5 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77e0e2c4-2000-4899-a105-029b6010f2d5 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77e0e2c4-2000-4899-a105-029b6010f2d5 00:41:58.767 request: 00:41:58.767 { 00:41:58.767 "uuid": "77e0e2c4-2000-4899-a105-029b6010f2d5", 00:41:58.767 "method": 
"bdev_lvol_get_lvstores", 00:41:58.767 "req_id": 1 00:41:58.767 } 00:41:58.767 Got JSON-RPC error response 00:41:58.767 response: 00:41:58.767 { 00:41:58.767 "code": -19, 00:41:58.767 "message": "No such device" 00:41:58.767 } 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:58.767 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:59.026 aio_bdev 00:41:59.026 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e6a3080c-6949-4870-9ddb-8a9431f4010b 00:41:59.026 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=e6a3080c-6949-4870-9ddb-8a9431f4010b 00:41:59.026 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:41:59.026 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:41:59.026 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:41:59.026 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:41:59.026 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:59.284 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e6a3080c-6949-4870-9ddb-8a9431f4010b -t 2000 00:41:59.543 [ 00:41:59.543 { 00:41:59.543 "name": "e6a3080c-6949-4870-9ddb-8a9431f4010b", 00:41:59.543 "aliases": [ 00:41:59.543 "lvs/lvol" 00:41:59.543 ], 00:41:59.543 "product_name": "Logical Volume", 00:41:59.543 "block_size": 4096, 00:41:59.543 "num_blocks": 38912, 00:41:59.543 "uuid": "e6a3080c-6949-4870-9ddb-8a9431f4010b", 00:41:59.543 "assigned_rate_limits": { 00:41:59.543 "rw_ios_per_sec": 0, 00:41:59.543 "rw_mbytes_per_sec": 0, 00:41:59.543 "r_mbytes_per_sec": 0, 00:41:59.543 "w_mbytes_per_sec": 0 00:41:59.543 }, 00:41:59.543 "claimed": false, 00:41:59.543 "zoned": false, 00:41:59.543 "supported_io_types": { 00:41:59.543 "read": true, 00:41:59.543 "write": true, 00:41:59.543 "unmap": true, 00:41:59.543 "flush": false, 00:41:59.543 "reset": true, 00:41:59.543 "nvme_admin": false, 00:41:59.543 "nvme_io": false, 00:41:59.543 "nvme_io_md": false, 00:41:59.543 "write_zeroes": true, 00:41:59.543 "zcopy": false, 00:41:59.543 "get_zone_info": false, 00:41:59.543 "zone_management": false, 00:41:59.543 "zone_append": false, 00:41:59.543 "compare": false, 00:41:59.543 "compare_and_write": false, 00:41:59.543 "abort": false, 00:41:59.543 "seek_hole": true, 00:41:59.543 "seek_data": true, 00:41:59.543 "copy": false, 00:41:59.543 "nvme_iov_md": false 00:41:59.543 }, 00:41:59.543 "driver_specific": { 00:41:59.543 "lvol": { 00:41:59.543 "lvol_store_uuid": "77e0e2c4-2000-4899-a105-029b6010f2d5", 00:41:59.543 "base_bdev": "aio_bdev", 00:41:59.543 
"thin_provision": false, 00:41:59.543 "num_allocated_clusters": 38, 00:41:59.543 "snapshot": false, 00:41:59.543 "clone": false, 00:41:59.543 "esnap_clone": false 00:41:59.543 } 00:41:59.543 } 00:41:59.543 } 00:41:59.543 ] 00:41:59.543 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:41:59.543 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77e0e2c4-2000-4899-a105-029b6010f2d5 00:41:59.543 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:59.543 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:59.543 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77e0e2c4-2000-4899-a105-029b6010f2d5 00:41:59.543 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:59.802 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:59.802 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e6a3080c-6949-4870-9ddb-8a9431f4010b 00:42:00.061 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77e0e2c4-2000-4899-a105-029b6010f2d5 
00:42:00.320 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:00.320 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:00.320 00:42:00.320 real 0m16.853s 00:42:00.320 user 0m16.525s 00:42:00.320 sys 0m1.498s 00:42:00.320 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:00.320 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:42:00.320 ************************************ 00:42:00.320 END TEST lvs_grow_clean 00:42:00.320 ************************************ 00:42:00.578 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:42:00.578 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:42:00.578 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:00.579 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:00.579 ************************************ 00:42:00.579 START TEST lvs_grow_dirty 00:42:00.579 ************************************ 00:42:00.579 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:42:00.579 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:42:00.579 15:46:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:42:00.579 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:42:00.579 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:42:00.579 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:42:00.579 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:42:00.579 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:00.579 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:00.579 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:00.837 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:42:00.837 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:42:00.837 15:46:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:00.837 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:00.837 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:42:01.096 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:42:01.096 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:42:01.096 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e lvol 150 00:42:01.355 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e3d9a9c5-1673-4da5-aa84-13dc67d72d8a 00:42:01.355 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:01.355 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:42:01.613 [2024-11-06 15:46:28.995157] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:42:01.613 [2024-11-06 
15:46:28.995351] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:42:01.613 true 00:42:01.613 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:01.613 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:42:01.613 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:42:01.613 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:42:01.872 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e3d9a9c5-1673-4da5-aa84-13dc67d72d8a 00:42:02.131 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:02.131 [2024-11-06 15:46:29.739752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:02.132 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:02.391 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4148708 00:42:02.391 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:02.391 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:42:02.391 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4148708 /var/tmp/bdevperf.sock 00:42:02.391 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 4148708 ']' 00:42:02.391 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:02.391 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:02.391 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:02.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:02.391 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:02.391 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:02.391 [2024-11-06 15:46:30.011763] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:42:02.391 [2024-11-06 15:46:30.011847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148708 ] 00:42:02.650 [2024-11-06 15:46:30.140882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:02.650 [2024-11-06 15:46:30.248794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:03.218 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:03.218 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:42:03.218 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:42:03.476 Nvme0n1 00:42:03.476 15:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:42:03.735 [ 00:42:03.735 { 00:42:03.735 "name": "Nvme0n1", 00:42:03.735 "aliases": [ 00:42:03.735 "e3d9a9c5-1673-4da5-aa84-13dc67d72d8a" 00:42:03.735 ], 00:42:03.735 "product_name": "NVMe disk", 00:42:03.735 "block_size": 4096, 00:42:03.735 "num_blocks": 38912, 00:42:03.735 "uuid": "e3d9a9c5-1673-4da5-aa84-13dc67d72d8a", 00:42:03.735 "numa_id": 1, 00:42:03.735 "assigned_rate_limits": { 00:42:03.735 "rw_ios_per_sec": 0, 00:42:03.735 "rw_mbytes_per_sec": 0, 00:42:03.735 "r_mbytes_per_sec": 0, 00:42:03.735 "w_mbytes_per_sec": 0 00:42:03.735 }, 00:42:03.735 "claimed": false, 00:42:03.735 "zoned": false, 
00:42:03.735 "supported_io_types": { 00:42:03.735 "read": true, 00:42:03.735 "write": true, 00:42:03.735 "unmap": true, 00:42:03.735 "flush": true, 00:42:03.735 "reset": true, 00:42:03.735 "nvme_admin": true, 00:42:03.735 "nvme_io": true, 00:42:03.735 "nvme_io_md": false, 00:42:03.735 "write_zeroes": true, 00:42:03.735 "zcopy": false, 00:42:03.735 "get_zone_info": false, 00:42:03.735 "zone_management": false, 00:42:03.735 "zone_append": false, 00:42:03.735 "compare": true, 00:42:03.735 "compare_and_write": true, 00:42:03.735 "abort": true, 00:42:03.735 "seek_hole": false, 00:42:03.735 "seek_data": false, 00:42:03.735 "copy": true, 00:42:03.735 "nvme_iov_md": false 00:42:03.736 }, 00:42:03.736 "memory_domains": [ 00:42:03.736 { 00:42:03.736 "dma_device_id": "system", 00:42:03.736 "dma_device_type": 1 00:42:03.736 } 00:42:03.736 ], 00:42:03.736 "driver_specific": { 00:42:03.736 "nvme": [ 00:42:03.736 { 00:42:03.736 "trid": { 00:42:03.736 "trtype": "TCP", 00:42:03.736 "adrfam": "IPv4", 00:42:03.736 "traddr": "10.0.0.2", 00:42:03.736 "trsvcid": "4420", 00:42:03.736 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:42:03.736 }, 00:42:03.736 "ctrlr_data": { 00:42:03.736 "cntlid": 1, 00:42:03.736 "vendor_id": "0x8086", 00:42:03.736 "model_number": "SPDK bdev Controller", 00:42:03.736 "serial_number": "SPDK0", 00:42:03.736 "firmware_revision": "25.01", 00:42:03.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:03.736 "oacs": { 00:42:03.736 "security": 0, 00:42:03.736 "format": 0, 00:42:03.736 "firmware": 0, 00:42:03.736 "ns_manage": 0 00:42:03.736 }, 00:42:03.736 "multi_ctrlr": true, 00:42:03.736 "ana_reporting": false 00:42:03.736 }, 00:42:03.736 "vs": { 00:42:03.736 "nvme_version": "1.3" 00:42:03.736 }, 00:42:03.736 "ns_data": { 00:42:03.736 "id": 1, 00:42:03.736 "can_share": true 00:42:03.736 } 00:42:03.736 } 00:42:03.736 ], 00:42:03.736 "mp_policy": "active_passive" 00:42:03.736 } 00:42:03.736 } 00:42:03.736 ] 00:42:03.736 15:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4148943 00:42:03.736 15:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:42:03.736 15:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:03.995 Running I/O for 10 seconds... 00:42:04.932 Latency(us) 00:42:04.932 [2024-11-06T14:46:32.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:04.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:04.932 Nvme0n1 : 1.00 19685.00 76.89 0.00 0.00 0.00 0.00 0.00 00:42:04.932 [2024-11-06T14:46:32.570Z] =================================================================================================================== 00:42:04.932 [2024-11-06T14:46:32.570Z] Total : 19685.00 76.89 0.00 0.00 0.00 0.00 0.00 00:42:04.932 00:42:05.867 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:05.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:05.867 Nvme0n1 : 2.00 20002.50 78.13 0.00 0.00 0.00 0.00 0.00 00:42:05.867 [2024-11-06T14:46:33.505Z] =================================================================================================================== 00:42:05.867 [2024-11-06T14:46:33.505Z] Total : 20002.50 78.13 0.00 0.00 0.00 0.00 0.00 00:42:05.867 00:42:05.867 true 00:42:05.867 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:05.867 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:42:06.125 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:42:06.125 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:42:06.125 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4148943 00:42:07.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:07.061 Nvme0n1 : 3.00 20002.67 78.14 0.00 0.00 0.00 0.00 0.00 00:42:07.061 [2024-11-06T14:46:34.699Z] =================================================================================================================== 00:42:07.061 [2024-11-06T14:46:34.699Z] Total : 20002.67 78.14 0.00 0.00 0.00 0.00 0.00 00:42:07.061 00:42:07.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:07.997 Nvme0n1 : 4.00 20106.25 78.54 0.00 0.00 0.00 0.00 0.00 00:42:07.997 [2024-11-06T14:46:35.635Z] =================================================================================================================== 00:42:07.997 [2024-11-06T14:46:35.635Z] Total : 20106.25 78.54 0.00 0.00 0.00 0.00 0.00 00:42:07.997 00:42:08.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:08.931 Nvme0n1 : 5.00 20174.40 78.81 0.00 0.00 0.00 0.00 0.00 00:42:08.931 [2024-11-06T14:46:36.569Z] =================================================================================================================== 00:42:08.931 [2024-11-06T14:46:36.569Z] Total : 20174.40 78.81 0.00 0.00 0.00 0.00 0.00 00:42:08.931 00:42:09.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:42:09.867 Nvme0n1 : 6.00 20230.50 79.03 0.00 0.00 0.00 0.00 0.00 00:42:09.867 [2024-11-06T14:46:37.505Z] =================================================================================================================== 00:42:09.867 [2024-11-06T14:46:37.505Z] Total : 20230.50 79.03 0.00 0.00 0.00 0.00 0.00 00:42:09.867 00:42:10.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:10.804 Nvme0n1 : 7.00 20279.57 79.22 0.00 0.00 0.00 0.00 0.00 00:42:10.804 [2024-11-06T14:46:38.443Z] =================================================================================================================== 00:42:10.805 [2024-11-06T14:46:38.443Z] Total : 20279.57 79.22 0.00 0.00 0.00 0.00 0.00 00:42:10.805 00:42:12.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:12.182 Nvme0n1 : 8.00 20300.50 79.30 0.00 0.00 0.00 0.00 0.00 00:42:12.182 [2024-11-06T14:46:39.820Z] =================================================================================================================== 00:42:12.182 [2024-11-06T14:46:39.820Z] Total : 20300.50 79.30 0.00 0.00 0.00 0.00 0.00 00:42:12.182 00:42:13.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:13.119 Nvme0n1 : 9.00 20292.33 79.27 0.00 0.00 0.00 0.00 0.00 00:42:13.119 [2024-11-06T14:46:40.757Z] =================================================================================================================== 00:42:13.119 [2024-11-06T14:46:40.757Z] Total : 20292.33 79.27 0.00 0.00 0.00 0.00 0.00 00:42:13.119 00:42:14.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:14.120 Nvme0n1 : 10.00 20307.80 79.33 0.00 0.00 0.00 0.00 0.00 00:42:14.120 [2024-11-06T14:46:41.758Z] =================================================================================================================== 00:42:14.120 [2024-11-06T14:46:41.758Z] Total : 20307.80 79.33 0.00 0.00 0.00 0.00 0.00 00:42:14.120 00:42:14.120 
00:42:14.120 Latency(us) 00:42:14.120 [2024-11-06T14:46:41.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:14.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:14.120 Nvme0n1 : 10.01 20306.27 79.32 0.00 0.00 6300.00 3073.95 29709.65 00:42:14.120 [2024-11-06T14:46:41.758Z] =================================================================================================================== 00:42:14.120 [2024-11-06T14:46:41.758Z] Total : 20306.27 79.32 0.00 0.00 6300.00 3073.95 29709.65 00:42:14.120 { 00:42:14.120 "results": [ 00:42:14.120 { 00:42:14.120 "job": "Nvme0n1", 00:42:14.120 "core_mask": "0x2", 00:42:14.120 "workload": "randwrite", 00:42:14.120 "status": "finished", 00:42:14.120 "queue_depth": 128, 00:42:14.120 "io_size": 4096, 00:42:14.120 "runtime": 10.007057, 00:42:14.120 "iops": 20306.26986535602, 00:42:14.120 "mibps": 79.32136666154695, 00:42:14.120 "io_failed": 0, 00:42:14.120 "io_timeout": 0, 00:42:14.120 "avg_latency_us": 6300.004423641409, 00:42:14.120 "min_latency_us": 3073.9504761904764, 00:42:14.120 "max_latency_us": 29709.653333333332 00:42:14.120 } 00:42:14.120 ], 00:42:14.120 "core_count": 1 00:42:14.120 } 00:42:14.120 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4148708 00:42:14.120 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 4148708 ']' 00:42:14.120 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 4148708 00:42:14.120 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:42:14.120 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:14.120 15:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4148708 00:42:14.120 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:14.120 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:14.120 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4148708' 00:42:14.120 killing process with pid 4148708 00:42:14.120 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 4148708 00:42:14.120 Received shutdown signal, test time was about 10.000000 seconds 00:42:14.120 00:42:14.120 Latency(us) 00:42:14.120 [2024-11-06T14:46:41.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:14.120 [2024-11-06T14:46:41.758Z] =================================================================================================================== 00:42:14.120 [2024-11-06T14:46:41.758Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:14.120 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 4148708 00:42:14.723 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:14.982 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:15.241 15:46:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:15.241 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4145406 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4145406 00:42:15.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4145406 Killed "${NVMF_APP[@]}" "$@" 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=4150783 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 4150783 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 4150783 ']' 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:15.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:15.501 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:15.501 [2024-11-06 15:46:43.053030] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:15.501 [2024-11-06 15:46:43.055065] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:42:15.501 [2024-11-06 15:46:43.055132] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:15.760 [2024-11-06 15:46:43.185341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.760 [2024-11-06 15:46:43.286379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:15.760 [2024-11-06 15:46:43.286423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:15.760 [2024-11-06 15:46:43.286434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:15.760 [2024-11-06 15:46:43.286443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:15.760 [2024-11-06 15:46:43.286452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:15.760 [2024-11-06 15:46:43.287816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:16.019 [2024-11-06 15:46:43.610864] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:16.019 [2024-11-06 15:46:43.611136] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:42:16.278 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:16.278 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:42:16.278 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:16.278 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:16.278 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:16.278 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:16.278 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:16.537 [2024-11-06 15:46:44.086916] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:42:16.537 [2024-11-06 15:46:44.087276] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:42:16.537 [2024-11-06 15:46:44.087401] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:42:16.537 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:42:16.537 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e3d9a9c5-1673-4da5-aa84-13dc67d72d8a 00:42:16.537 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local 
bdev_name=e3d9a9c5-1673-4da5-aa84-13dc67d72d8a 00:42:16.537 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:42:16.537 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:42:16.537 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:42:16.537 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:42:16.537 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:16.796 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e3d9a9c5-1673-4da5-aa84-13dc67d72d8a -t 2000 00:42:17.055 [ 00:42:17.055 { 00:42:17.055 "name": "e3d9a9c5-1673-4da5-aa84-13dc67d72d8a", 00:42:17.055 "aliases": [ 00:42:17.055 "lvs/lvol" 00:42:17.055 ], 00:42:17.055 "product_name": "Logical Volume", 00:42:17.055 "block_size": 4096, 00:42:17.055 "num_blocks": 38912, 00:42:17.055 "uuid": "e3d9a9c5-1673-4da5-aa84-13dc67d72d8a", 00:42:17.055 "assigned_rate_limits": { 00:42:17.055 "rw_ios_per_sec": 0, 00:42:17.055 "rw_mbytes_per_sec": 0, 00:42:17.055 "r_mbytes_per_sec": 0, 00:42:17.055 "w_mbytes_per_sec": 0 00:42:17.055 }, 00:42:17.055 "claimed": false, 00:42:17.055 "zoned": false, 00:42:17.055 "supported_io_types": { 00:42:17.055 "read": true, 00:42:17.055 "write": true, 00:42:17.055 "unmap": true, 00:42:17.055 "flush": false, 00:42:17.055 "reset": true, 00:42:17.055 "nvme_admin": false, 00:42:17.055 "nvme_io": false, 00:42:17.055 "nvme_io_md": false, 00:42:17.055 "write_zeroes": true, 
00:42:17.055 "zcopy": false, 00:42:17.055 "get_zone_info": false, 00:42:17.055 "zone_management": false, 00:42:17.055 "zone_append": false, 00:42:17.055 "compare": false, 00:42:17.055 "compare_and_write": false, 00:42:17.055 "abort": false, 00:42:17.055 "seek_hole": true, 00:42:17.055 "seek_data": true, 00:42:17.055 "copy": false, 00:42:17.055 "nvme_iov_md": false 00:42:17.055 }, 00:42:17.055 "driver_specific": { 00:42:17.055 "lvol": { 00:42:17.055 "lvol_store_uuid": "458f3c39-065f-4c59-aeb1-b0f9acc8df0e", 00:42:17.055 "base_bdev": "aio_bdev", 00:42:17.055 "thin_provision": false, 00:42:17.055 "num_allocated_clusters": 38, 00:42:17.056 "snapshot": false, 00:42:17.056 "clone": false, 00:42:17.056 "esnap_clone": false 00:42:17.056 } 00:42:17.056 } 00:42:17.056 } 00:42:17.056 ] 00:42:17.056 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:42:17.056 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:42:17.056 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:17.056 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:42:17.056 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:17.056 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:42:17.315 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:42:17.315 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:17.575 [2024-11-06 15:46:45.044635] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:42:17.575 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:17.575 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:42:17.575 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:17.575 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:17.575 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:17.575 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:17.575 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:17.575 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:17.575 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:17.575 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:17.575 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:42:17.575 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:17.834 request: 00:42:17.834 { 00:42:17.834 "uuid": "458f3c39-065f-4c59-aeb1-b0f9acc8df0e", 00:42:17.834 "method": "bdev_lvol_get_lvstores", 00:42:17.834 "req_id": 1 00:42:17.834 } 00:42:17.834 Got JSON-RPC error response 00:42:17.834 response: 00:42:17.834 { 00:42:17.834 "code": -19, 00:42:17.834 "message": "No such device" 00:42:17.834 } 00:42:17.834 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:42:17.834 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:17.834 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:17.834 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:17.834 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:17.834 aio_bdev 00:42:18.093 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e3d9a9c5-1673-4da5-aa84-13dc67d72d8a 00:42:18.093 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e3d9a9c5-1673-4da5-aa84-13dc67d72d8a 00:42:18.093 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:42:18.093 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:42:18.093 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:42:18.093 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:42:18.093 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:18.093 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e3d9a9c5-1673-4da5-aa84-13dc67d72d8a -t 2000 00:42:18.353 [ 00:42:18.353 { 00:42:18.353 "name": "e3d9a9c5-1673-4da5-aa84-13dc67d72d8a", 00:42:18.353 "aliases": [ 00:42:18.353 "lvs/lvol" 00:42:18.353 ], 00:42:18.353 "product_name": "Logical Volume", 00:42:18.353 "block_size": 4096, 00:42:18.353 "num_blocks": 38912, 00:42:18.353 "uuid": "e3d9a9c5-1673-4da5-aa84-13dc67d72d8a", 00:42:18.353 "assigned_rate_limits": { 00:42:18.353 "rw_ios_per_sec": 0, 00:42:18.353 "rw_mbytes_per_sec": 0, 00:42:18.353 
"r_mbytes_per_sec": 0, 00:42:18.353 "w_mbytes_per_sec": 0 00:42:18.353 }, 00:42:18.353 "claimed": false, 00:42:18.353 "zoned": false, 00:42:18.353 "supported_io_types": { 00:42:18.353 "read": true, 00:42:18.353 "write": true, 00:42:18.353 "unmap": true, 00:42:18.353 "flush": false, 00:42:18.353 "reset": true, 00:42:18.353 "nvme_admin": false, 00:42:18.353 "nvme_io": false, 00:42:18.353 "nvme_io_md": false, 00:42:18.353 "write_zeroes": true, 00:42:18.353 "zcopy": false, 00:42:18.353 "get_zone_info": false, 00:42:18.353 "zone_management": false, 00:42:18.353 "zone_append": false, 00:42:18.353 "compare": false, 00:42:18.353 "compare_and_write": false, 00:42:18.353 "abort": false, 00:42:18.353 "seek_hole": true, 00:42:18.353 "seek_data": true, 00:42:18.353 "copy": false, 00:42:18.353 "nvme_iov_md": false 00:42:18.353 }, 00:42:18.353 "driver_specific": { 00:42:18.353 "lvol": { 00:42:18.353 "lvol_store_uuid": "458f3c39-065f-4c59-aeb1-b0f9acc8df0e", 00:42:18.353 "base_bdev": "aio_bdev", 00:42:18.353 "thin_provision": false, 00:42:18.353 "num_allocated_clusters": 38, 00:42:18.353 "snapshot": false, 00:42:18.353 "clone": false, 00:42:18.353 "esnap_clone": false 00:42:18.353 } 00:42:18.353 } 00:42:18.353 } 00:42:18.353 ] 00:42:18.353 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:42:18.353 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:18.353 15:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:42:18.612 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:42:18.612 15:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:18.612 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:42:18.871 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:42:18.871 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e3d9a9c5-1673-4da5-aa84-13dc67d72d8a 00:42:18.871 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 458f3c39-065f-4c59-aeb1-b0f9acc8df0e 00:42:19.129 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:19.388 00:42:19.388 real 0m18.855s 00:42:19.388 user 0m36.213s 00:42:19.388 sys 0m3.982s 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:19.388 ************************************ 00:42:19.388 END TEST lvs_grow_dirty 00:42:19.388 ************************************ 
00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:42:19.388 nvmf_trace.0 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:19.388 15:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:19.388 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:19.388 rmmod nvme_tcp 00:42:19.388 rmmod nvme_fabrics 00:42:19.388 rmmod nvme_keyring 00:42:19.388 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 4150783 ']' 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 4150783 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 4150783 ']' 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 4150783 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4150783 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:19.648 
15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4150783' 00:42:19.648 killing process with pid 4150783 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 4150783 00:42:19.648 15:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 4150783 00:42:20.585 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:20.585 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:20.585 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:20.585 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:42:20.585 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:42:20.585 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:20.585 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:42:20.585 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:20.585 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:20.585 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:20.585 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:20.585 15:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:23.121 
15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:23.121 00:42:23.121 real 0m46.429s 00:42:23.121 user 0m56.609s 00:42:23.121 sys 0m10.482s 00:42:23.121 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:23.121 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:23.121 ************************************ 00:42:23.121 END TEST nvmf_lvs_grow 00:42:23.121 ************************************ 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:23.122 ************************************ 00:42:23.122 START TEST nvmf_bdev_io_wait 00:42:23.122 ************************************ 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:42:23.122 * Looking for test storage... 
00:42:23.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:23.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.122 --rc genhtml_branch_coverage=1 00:42:23.122 --rc genhtml_function_coverage=1 00:42:23.122 --rc genhtml_legend=1 00:42:23.122 --rc geninfo_all_blocks=1 00:42:23.122 --rc geninfo_unexecuted_blocks=1 00:42:23.122 00:42:23.122 ' 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:23.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.122 --rc genhtml_branch_coverage=1 00:42:23.122 --rc genhtml_function_coverage=1 00:42:23.122 --rc genhtml_legend=1 00:42:23.122 --rc geninfo_all_blocks=1 00:42:23.122 --rc geninfo_unexecuted_blocks=1 00:42:23.122 00:42:23.122 ' 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:23.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.122 --rc genhtml_branch_coverage=1 00:42:23.122 --rc genhtml_function_coverage=1 00:42:23.122 --rc genhtml_legend=1 00:42:23.122 --rc geninfo_all_blocks=1 00:42:23.122 --rc geninfo_unexecuted_blocks=1 00:42:23.122 00:42:23.122 ' 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:23.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.122 --rc genhtml_branch_coverage=1 00:42:23.122 --rc genhtml_function_coverage=1 
00:42:23.122 --rc genhtml_legend=1 00:42:23.122 --rc geninfo_all_blocks=1 00:42:23.122 --rc geninfo_unexecuted_blocks=1 00:42:23.122 00:42:23.122 ' 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:42:23.122 15:46:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.122 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.122 15:46:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:23.123 15:46:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:23.123 15:46:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:42:23.123 15:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:42:29.694 15:46:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:42:29.694 Found 0000:86:00.0 (0x8086 - 0x159b) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:42:29.694 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:42:29.694 Found net devices under 0000:86:00.0: cvl_0_0 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:42:29.694 Found net devices under 0000:86:00.1: cvl_0_1 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:42:29.694 15:46:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:29.694 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:29.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:29.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:42:29.695 00:42:29.695 --- 10.0.0.2 ping statistics --- 00:42:29.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:29.695 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:29.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:29.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:42:29.695 00:42:29.695 --- 10.0.0.1 ping statistics --- 00:42:29.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:29.695 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:29.695 15:46:56 
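The `nvmf_tcp_init` steps above move one port of the e810 pair (`cvl_0_0`) into a private network namespace so the target and initiator can exchange real TCP traffic on a single host. A condensed sketch of that sequence follows; the interface names, namespace name, and addresses are taken from the log, and the `run` wrapper (an addition of this sketch, not part of `nvmf/common.sh`) defaults to printing the commands because executing them requires root:

```shell
#!/usr/bin/env bash
set -euo pipefail

# run: print the command in dry-run mode (the default); set DRY_RUN=0 to
# actually execute (requires root and the cvl_0_0/cvl_0_1 interfaces).
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

nvmf_tcp_init_sketch() {
  local ns=cvl_0_0_ns_spdk
  run ip -4 addr flush cvl_0_0                 # clear stale addresses
  run ip -4 addr flush cvl_0_1
  run ip netns add "$ns"                       # target side gets its own netns
  run ip link set cvl_0_0 netns "$ns"
  run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP, host namespace
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  run ip link set cvl_0_1 up
  run ip netns exec "$ns" ip link set cvl_0_0 up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                       # initiator -> target check
  run ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator check
}

nvmf_tcp_init_sketch
```

The two pings mirror the checks in the log: both directions must answer before the test proceeds, which is why `return 0` follows them.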
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=4155062 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 4155062 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 4155062 ']' 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:29.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:29.695 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:29.695 [2024-11-06 15:46:56.563608] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:29.695 [2024-11-06 15:46:56.565754] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:42:29.695 [2024-11-06 15:46:56.565825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:29.695 [2024-11-06 15:46:56.695282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:29.695 [2024-11-06 15:46:56.805992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:29.695 [2024-11-06 15:46:56.806037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:29.695 [2024-11-06 15:46:56.806049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:29.695 [2024-11-06 15:46:56.806057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:29.695 [2024-11-06 15:46:56.806067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:29.695 [2024-11-06 15:46:56.808704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:29.695 [2024-11-06 15:46:56.808778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:29.695 [2024-11-06 15:46:56.808856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:29.695 [2024-11-06 15:46:56.808879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:29.695 [2024-11-06 15:46:56.809302] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:29.954 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:29.954 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:42:29.954 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:29.954 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:29.954 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:29.954 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:29.954 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:42:29.954 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.954 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:29.954 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:29.954 15:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:42:29.954 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:29.954 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.213 [2024-11-06 15:46:57.631169] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:30.213 [2024-11-06 15:46:57.632255] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:30.213 [2024-11-06 15:46:57.633257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:30.213 [2024-11-06 15:46:57.633649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.213 [2024-11-06 15:46:57.645526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.213 Malloc0 00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.213 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.214 15:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:30.214 [2024-11-06 15:46:57.813907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4155311 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4155313 00:42:30.214 15:46:57 
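The `rpc_cmd` calls above (bdev_io_wait.sh lines 18–25) build the target end to end once `nvmf_tgt` is waiting for RPCs. A sketch of the same sequence as plain `rpc.py` invocations follows; arguments are copied from the log, while the `rpc` wrapper is an assumption of this sketch that prints each call by default (point `RPC` at `scripts/rpc.py` in an SPDK checkout, inside the target namespace, to run it for real):

```shell
#!/usr/bin/env bash
set -euo pipefail

# rpc: print the rpc.py invocation; override RPC to execute against a
# running nvmf_tgt started with --wait-for-rpc.
rpc() { echo "${RPC:-rpc.py}" "$@"; }

target_setup_sketch() {
  rpc bdev_set_options -p 5 -c 1                  # set bdev pool options pre-init
  rpc framework_start_init                        # finish deferred startup
  rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB IO unit
  rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM disk, 512 B blocks
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}

target_setup_sketch
```

The final listener call is what produces the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice in the log.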
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:30.214 { 00:42:30.214 "params": { 00:42:30.214 "name": "Nvme$subsystem", 00:42:30.214 "trtype": "$TEST_TRANSPORT", 00:42:30.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:30.214 "adrfam": "ipv4", 00:42:30.214 "trsvcid": "$NVMF_PORT", 00:42:30.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:30.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:30.214 "hdgst": ${hdgst:-false}, 00:42:30.214 "ddgst": ${ddgst:-false} 00:42:30.214 }, 00:42:30.214 "method": "bdev_nvme_attach_controller" 00:42:30.214 } 00:42:30.214 EOF 00:42:30.214 )") 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4155315 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:30.214 15:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:30.214 { 00:42:30.214 "params": { 00:42:30.214 "name": "Nvme$subsystem", 00:42:30.214 "trtype": "$TEST_TRANSPORT", 00:42:30.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:30.214 "adrfam": "ipv4", 00:42:30.214 "trsvcid": "$NVMF_PORT", 00:42:30.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:30.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:30.214 "hdgst": ${hdgst:-false}, 00:42:30.214 "ddgst": ${ddgst:-false} 00:42:30.214 }, 00:42:30.214 "method": "bdev_nvme_attach_controller" 00:42:30.214 } 00:42:30.214 EOF 00:42:30.214 )") 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4155318 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:30.214 { 00:42:30.214 "params": { 00:42:30.214 "name": "Nvme$subsystem", 00:42:30.214 "trtype": "$TEST_TRANSPORT", 00:42:30.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:30.214 "adrfam": "ipv4", 00:42:30.214 "trsvcid": "$NVMF_PORT", 00:42:30.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:30.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:30.214 "hdgst": ${hdgst:-false}, 00:42:30.214 "ddgst": ${ddgst:-false} 00:42:30.214 }, 00:42:30.214 "method": "bdev_nvme_attach_controller" 00:42:30.214 } 00:42:30.214 EOF 00:42:30.214 )") 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:30.214 { 00:42:30.214 "params": { 00:42:30.214 "name": "Nvme$subsystem", 00:42:30.214 "trtype": "$TEST_TRANSPORT", 00:42:30.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:30.214 "adrfam": "ipv4", 00:42:30.214 "trsvcid": "$NVMF_PORT", 00:42:30.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:30.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:30.214 "hdgst": ${hdgst:-false}, 00:42:30.214 "ddgst": ${ddgst:-false} 00:42:30.214 }, 00:42:30.214 "method": 
"bdev_nvme_attach_controller" 00:42:30.214 } 00:42:30.214 EOF 00:42:30.214 )") 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4155311 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:30.214 "params": { 00:42:30.214 "name": "Nvme1", 00:42:30.214 "trtype": "tcp", 00:42:30.214 "traddr": "10.0.0.2", 00:42:30.214 "adrfam": "ipv4", 00:42:30.214 "trsvcid": "4420", 00:42:30.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:30.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:30.214 "hdgst": false, 00:42:30.214 "ddgst": false 00:42:30.214 }, 00:42:30.214 "method": "bdev_nvme_attach_controller" 00:42:30.214 }' 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:30.214 "params": { 00:42:30.214 "name": "Nvme1", 00:42:30.214 "trtype": "tcp", 00:42:30.214 "traddr": "10.0.0.2", 00:42:30.214 "adrfam": "ipv4", 00:42:30.214 "trsvcid": "4420", 00:42:30.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:30.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:30.214 "hdgst": false, 00:42:30.214 "ddgst": false 00:42:30.214 }, 00:42:30.214 "method": "bdev_nvme_attach_controller" 00:42:30.214 }' 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:30.214 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:30.214 "params": { 00:42:30.214 "name": "Nvme1", 00:42:30.214 "trtype": "tcp", 00:42:30.214 "traddr": "10.0.0.2", 00:42:30.214 "adrfam": "ipv4", 00:42:30.214 "trsvcid": "4420", 00:42:30.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:30.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:30.215 "hdgst": false, 00:42:30.215 "ddgst": false 00:42:30.215 }, 00:42:30.215 "method": "bdev_nvme_attach_controller" 00:42:30.215 }' 00:42:30.215 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:30.215 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:30.215 "params": { 00:42:30.215 "name": "Nvme1", 00:42:30.215 "trtype": "tcp", 00:42:30.215 "traddr": "10.0.0.2", 00:42:30.215 "adrfam": "ipv4", 00:42:30.215 "trsvcid": "4420", 00:42:30.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:30.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:30.215 "hdgst": false, 00:42:30.215 "ddgst": false 00:42:30.215 }, 00:42:30.215 "method": "bdev_nvme_attach_controller" 
00:42:30.215 }' 00:42:30.474 [2024-11-06 15:46:57.891876] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:42:30.474 [2024-11-06 15:46:57.891974] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:42:30.474 [2024-11-06 15:46:57.895638] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:42:30.474 [2024-11-06 15:46:57.895723] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:42:30.474 [2024-11-06 15:46:57.895761] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:42:30.474 [2024-11-06 15:46:57.895836] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:42:30.474 [2024-11-06 15:46:57.896721] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:42:30.474 [2024-11-06 15:46:57.896793] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:42:30.733 [2024-11-06 15:46:58.128159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:30.733 [2024-11-06 15:46:58.232428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:30.733 [2024-11-06 15:46:58.253467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:30.733 [2024-11-06 15:46:58.299867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:30.733 [2024-11-06 15:46:58.352930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:30.733 [2024-11-06 15:46:58.369131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:30.992 [2024-11-06 15:46:58.399274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:42:30.992 [2024-11-06 15:46:58.459370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:31.251 Running I/O for 1 seconds... 00:42:31.251 Running I/O for 1 seconds... 00:42:31.251 Running I/O for 1 seconds... 00:42:31.510 Running I/O for 1 seconds... 
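The four "Running I/O for 1 seconds..." lines correspond to four bdevperf processes, one per workload (write, read, flush, unmap), each pinned to its own core and fed the generated attach JSON over `/dev/fd/63`. A sketch of the invocation shape, with masks, instance ids, and queue parameters taken from the log (the `BDEVPERF` path default is an assumption; this sketch only prints the commands):

```shell
#!/usr/bin/env bash
set -euo pipefail

# bdevperf_cmd: print one bdevperf command line as seen in the log:
# -m core mask, -i instance id, -q queue depth, -o IO size, -w workload,
# -t run seconds, -s hugepage memory (MiB). The JSON config would be
# supplied via process substitution on fd 63 in the real test.
BDEVPERF=${BDEVPERF:-./build/examples/bdevperf}   # path is an assumption

bdevperf_cmd() {
  local mask=$1 inst=$2 workload=$3
  echo "$BDEVPERF -m $mask -i $inst --json /dev/fd/63 -q 128 -o 4096 -w $workload -t 1 -s 256"
}

bdevperf_cmd 0x10 1 write
bdevperf_cmd 0x20 2 read
bdevperf_cmd 0x40 3 flush
bdevperf_cmd 0x80 4 unmap
```

Distinct core masks and `--file-prefix` values (spdk1..spdk4 in the EAL parameters above) let the four DPDK processes coexist on one machine.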
00:42:32.077 7705.00 IOPS, 30.10 MiB/s 00:42:32.077 Latency(us) 00:42:32.077 [2024-11-06T14:46:59.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.077 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:42:32.077 Nvme1n1 : 1.02 7725.05 30.18 0.00 0.00 16501.27 4743.56 26838.55 00:42:32.077 [2024-11-06T14:46:59.715Z] =================================================================================================================== 00:42:32.077 [2024-11-06T14:46:59.715Z] Total : 7725.05 30.18 0.00 0.00 16501.27 4743.56 26838.55 00:42:32.335 7318.00 IOPS, 28.59 MiB/s 00:42:32.335 Latency(us) 00:42:32.335 [2024-11-06T14:46:59.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.335 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:42:32.335 Nvme1n1 : 1.01 7434.03 29.04 0.00 0.00 17171.50 4431.48 29959.31 00:42:32.335 [2024-11-06T14:46:59.973Z] =================================================================================================================== 00:42:32.335 [2024-11-06T14:46:59.973Z] Total : 7434.03 29.04 0.00 0.00 17171.50 4431.48 29959.31 00:42:32.335 223864.00 IOPS, 874.47 MiB/s 00:42:32.335 Latency(us) 00:42:32.335 [2024-11-06T14:46:59.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.335 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:42:32.335 Nvme1n1 : 1.00 223503.67 873.06 0.00 0.00 569.82 263.31 1599.39 00:42:32.335 [2024-11-06T14:46:59.973Z] =================================================================================================================== 00:42:32.335 [2024-11-06T14:46:59.973Z] Total : 223503.67 873.06 0.00 0.00 569.82 263.31 1599.39 00:42:32.335 10943.00 IOPS, 42.75 MiB/s 00:42:32.335 Latency(us) 00:42:32.335 [2024-11-06T14:46:59.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.335 Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:42:32.335 Nvme1n1 : 1.01 11005.65 42.99 0.00 0.00 11594.21 4712.35 17226.61 00:42:32.335 [2024-11-06T14:46:59.973Z] =================================================================================================================== 00:42:32.335 [2024-11-06T14:46:59.973Z] Total : 11005.65 42.99 0.00 0.00 11594.21 4712.35 17226.61 00:42:32.902 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4155313 00:42:33.160 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4155315 00:42:33.160 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4155318 00:42:33.160 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:33.160 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:33.160 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:33.161 15:47:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:33.161 rmmod nvme_tcp 00:42:33.161 rmmod nvme_fabrics 00:42:33.161 rmmod nvme_keyring 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 4155062 ']' 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 4155062 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 4155062 ']' 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 4155062 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4155062 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4155062' 00:42:33.161 killing process with pid 4155062 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 4155062 00:42:33.161 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 4155062 00:42:34.097 15:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:34.097 15:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:34.097 15:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:34.097 15:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:42:34.097 15:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:42:34.097 15:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:34.097 15:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:42:34.097 15:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:34.097 15:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:34.097 15:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:34.097 15:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:34.097 
15:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:36.632 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:36.632 00:42:36.632 real 0m13.459s 00:42:36.632 user 0m22.842s 00:42:36.632 sys 0m7.122s 00:42:36.632 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:36.632 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:36.632 ************************************ 00:42:36.632 END TEST nvmf_bdev_io_wait 00:42:36.632 ************************************ 00:42:36.632 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:42:36.632 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:42:36.632 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:36.632 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:36.632 ************************************ 00:42:36.632 START TEST nvmf_queue_depth 00:42:36.632 ************************************ 00:42:36.632 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:42:36.632 * Looking for test storage... 
00:42:36.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:36.632 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:36.632 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:42:36.632 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:42:36.632 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:36.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.633 --rc genhtml_branch_coverage=1 00:42:36.633 --rc genhtml_function_coverage=1 00:42:36.633 --rc genhtml_legend=1 00:42:36.633 --rc geninfo_all_blocks=1 00:42:36.633 --rc geninfo_unexecuted_blocks=1 00:42:36.633 00:42:36.633 ' 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:36.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.633 --rc genhtml_branch_coverage=1 00:42:36.633 --rc genhtml_function_coverage=1 00:42:36.633 --rc genhtml_legend=1 00:42:36.633 --rc geninfo_all_blocks=1 00:42:36.633 --rc geninfo_unexecuted_blocks=1 00:42:36.633 00:42:36.633 ' 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:36.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.633 --rc genhtml_branch_coverage=1 00:42:36.633 --rc genhtml_function_coverage=1 00:42:36.633 --rc genhtml_legend=1 00:42:36.633 --rc geninfo_all_blocks=1 00:42:36.633 --rc geninfo_unexecuted_blocks=1 00:42:36.633 00:42:36.633 ' 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:36.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.633 --rc genhtml_branch_coverage=1 00:42:36.633 --rc genhtml_function_coverage=1 00:42:36.633 --rc genhtml_legend=1 00:42:36.633 --rc 
geninfo_all_blocks=1 00:42:36.633 --rc geninfo_unexecuted_blocks=1 00:42:36.633 00:42:36.633 ' 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.633 15:47:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:36.633 15:47:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:36.633 15:47:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:42:36.633 15:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:42:43.204 
15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:43.204 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:42:43.205 Found 0000:86:00.0 (0x8086 - 0x159b) 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:43.205 15:47:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:42:43.205 Found 0000:86:00.1 (0x8086 - 0x159b) 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:42:43.205 Found net devices under 0000:86:00.0: cvl_0_0 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:42:43.205 Found net devices under 0000:86:00.1: cvl_0_1 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:43.205 15:47:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:43.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:43.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:42:43.205 00:42:43.205 --- 10.0.0.2 ping statistics --- 00:42:43.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:43.205 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:43.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:43.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:42:43.205 00:42:43.205 --- 10.0.0.1 ping statistics --- 00:42:43.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:43.205 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:43.205 15:47:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=4159327 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 4159327 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 4159327 ']' 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:43.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:43.205 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:43.206 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:43.206 [2024-11-06 15:47:10.039997] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:43.206 [2024-11-06 15:47:10.042110] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:42:43.206 [2024-11-06 15:47:10.042177] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:43.206 [2024-11-06 15:47:10.178642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:43.206 [2024-11-06 15:47:10.283137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:43.206 [2024-11-06 15:47:10.283179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:43.206 [2024-11-06 15:47:10.283193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:43.206 [2024-11-06 15:47:10.283207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:43.206 [2024-11-06 15:47:10.283218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:43.206 [2024-11-06 15:47:10.284622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:43.206 [2024-11-06 15:47:10.592891] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:43.206 [2024-11-06 15:47:10.593170] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
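The network-namespace setup that nvmf_tcp_init traced above (nvmf/common.sh@267 through @291) can be recapped as a minimal sketch. Interface names (cvl_0_0/cvl_0_1), addresses, and the iptables rule are copied from the log; the DRY_RUN guard is an addition, not part of nvmf/common.sh, so the sketch only prints the privileged commands instead of executing them:

```shell
# Sketch of the nvmf_tcp_init steps seen in the log above. DRY_RUN=1 (an
# addition for safety) prints each command instead of running it, since the
# real steps need root and the physical cvl_0_* devices.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0                        # common.sh@267
run ip -4 addr flush cvl_0_1                        # common.sh@268
run ip netns add "$NS"                              # common.sh@271
run ip link set cvl_0_0 netns "$NS"                 # move target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # verify the target IP is reachable
```

With DRY_RUN unset, the same function executes the commands in the order the log shows them.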
00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:43.465 [2024-11-06 15:47:10.893695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:43.465 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:43.465 Malloc0 00:42:43.465 15:47:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:43.465 [2024-11-06 15:47:11.041451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:43.465 
15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4159497 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4159497 /var/tmp/bdevperf.sock 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 4159497 ']' 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:43.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:43.465 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:43.725 [2024-11-06 15:47:11.118091] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:42:43.725 [2024-11-06 15:47:11.118176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159497 ] 00:42:43.725 [2024-11-06 15:47:11.243570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:43.725 [2024-11-06 15:47:11.345998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:44.661 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:44.661 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:42:44.661 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:42:44.661 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.661 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:44.661 NVMe0n1 00:42:44.661 15:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.661 15:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:44.661 Running I/O for 10 seconds... 
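Before bdevperf starts issuing I/O, target/queue_depth.sh provisions the subsystem with the RPC sequence seen at sh@23 through sh@27 above. A dry-run recap (RPC=echo keeps it safe to run anywhere; the scripts/rpc.py path below is an assumption about the SPDK tree layout, swap it in to drive a live target):

```shell
# Dry-run recap of the RPC provisioning sequence from target/queue_depth.sh
# in the log above. "echo" makes every call a no-op print; replace it with
# the real rpc.py (path is an assumption) to talk to a running nvmf_tgt.
RPC="echo scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192                    # sh@23: TCP transport, 8192 in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0                       # sh@24: 64 MiB ramdisk, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # sh@25
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # sh@26: expose the ramdisk as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # sh@27
```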
00:42:46.533 10247.00 IOPS, 40.03 MiB/s [2024-11-06T14:47:15.546Z] 10644.00 IOPS, 41.58 MiB/s [2024-11-06T14:47:16.481Z] 10591.33 IOPS, 41.37 MiB/s [2024-11-06T14:47:17.420Z] 10749.25 IOPS, 41.99 MiB/s [2024-11-06T14:47:18.357Z] 10770.00 IOPS, 42.07 MiB/s [2024-11-06T14:47:19.294Z] 10756.00 IOPS, 42.02 MiB/s [2024-11-06T14:47:20.228Z] 10805.71 IOPS, 42.21 MiB/s [2024-11-06T14:47:21.605Z] 10771.25 IOPS, 42.08 MiB/s [2024-11-06T14:47:22.173Z] 10806.22 IOPS, 42.21 MiB/s [2024-11-06T14:47:22.433Z] 10829.30 IOPS, 42.30 MiB/s 00:42:54.795 Latency(us) 00:42:54.795 [2024-11-06T14:47:22.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:54.795 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:42:54.795 Verification LBA range: start 0x0 length 0x4000 00:42:54.795 NVMe0n1 : 10.07 10847.38 42.37 0.00 0.00 94026.74 20597.03 60917.27 00:42:54.795 [2024-11-06T14:47:22.433Z] =================================================================================================================== 00:42:54.795 [2024-11-06T14:47:22.433Z] Total : 10847.38 42.37 0.00 0.00 94026.74 20597.03 60917.27 00:42:54.795 { 00:42:54.795 "results": [ 00:42:54.795 { 00:42:54.795 "job": "NVMe0n1", 00:42:54.795 "core_mask": "0x1", 00:42:54.795 "workload": "verify", 00:42:54.795 "status": "finished", 00:42:54.795 "verify_range": { 00:42:54.795 "start": 0, 00:42:54.795 "length": 16384 00:42:54.795 }, 00:42:54.795 "queue_depth": 1024, 00:42:54.795 "io_size": 4096, 00:42:54.795 "runtime": 10.074879, 00:42:54.795 "iops": 10847.375933745705, 00:42:54.795 "mibps": 42.37256224119416, 00:42:54.795 "io_failed": 0, 00:42:54.795 "io_timeout": 0, 00:42:54.795 "avg_latency_us": 94026.73717858689, 00:42:54.795 "min_latency_us": 20597.02857142857, 00:42:54.795 "max_latency_us": 60917.27238095238 00:42:54.795 } 00:42:54.795 ], 00:42:54.795 "core_count": 1 00:42:54.795 } 00:42:54.795 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 4159497 00:42:54.795 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 4159497 ']' 00:42:54.795 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 4159497 00:42:54.795 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:42:54.795 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:54.795 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4159497 00:42:54.795 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:54.795 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:54.795 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4159497' 00:42:54.795 killing process with pid 4159497 00:42:54.795 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 4159497 00:42:54.795 Received shutdown signal, test time was about 10.000000 seconds 00:42:54.795 00:42:54.795 Latency(us) 00:42:54.795 [2024-11-06T14:47:22.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:54.795 [2024-11-06T14:47:22.433Z] =================================================================================================================== 00:42:54.795 [2024-11-06T14:47:22.433Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:54.795 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 4159497 00:42:55.732 15:47:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:42:55.732 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:55.733 rmmod nvme_tcp 00:42:55.733 rmmod nvme_fabrics 00:42:55.733 rmmod nvme_keyring 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 4159327 ']' 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 4159327 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 4159327 ']' 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 4159327 00:42:55.733 15:47:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4159327 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4159327' 00:42:55.733 killing process with pid 4159327 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 4159327 00:42:55.733 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 4159327 00:42:57.112 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:57.112 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:57.112 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:57.112 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:42:57.112 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:42:57.112 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:57.112 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
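As a sanity check on the bdevperf summary reported above (10847.38 IOPS at 4096-byte I/O), throughput in MiB/s should equal IOPS × io_size / 2^20; the values below are copied from the JSON result block:

```shell
# Cross-check of the bdevperf results: mibps = iops * io_size / 2^20.
# iops and io_size are taken verbatim from the JSON summary in the log.
iops=10847.375933745705
io_size=4096
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
echo "$mibps MiB/s"   # 42.37 MiB/s, matching the reported mibps field
```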
00:42:57.112 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:57.112 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:57.112 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:57.112 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:57.112 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:59.018 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:59.018 00:42:59.018 real 0m22.781s 00:42:59.018 user 0m26.837s 00:42:59.018 sys 0m6.721s 00:42:59.018 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:59.018 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:59.018 ************************************ 00:42:59.018 END TEST nvmf_queue_depth 00:42:59.018 ************************************ 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:59.278 ************************************ 00:42:59.278 START 
TEST nvmf_target_multipath 00:42:59.278 ************************************ 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:42:59.278 * Looking for test storage... 00:42:59.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:42:59.278 15:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:59.278 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:59.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.278 --rc genhtml_branch_coverage=1 00:42:59.278 --rc genhtml_function_coverage=1 00:42:59.278 --rc genhtml_legend=1 00:42:59.278 --rc geninfo_all_blocks=1 00:42:59.278 --rc geninfo_unexecuted_blocks=1 00:42:59.279 00:42:59.279 ' 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:59.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.279 --rc genhtml_branch_coverage=1 00:42:59.279 --rc genhtml_function_coverage=1 00:42:59.279 --rc genhtml_legend=1 00:42:59.279 --rc geninfo_all_blocks=1 00:42:59.279 --rc geninfo_unexecuted_blocks=1 00:42:59.279 00:42:59.279 ' 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:59.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.279 --rc genhtml_branch_coverage=1 00:42:59.279 --rc genhtml_function_coverage=1 00:42:59.279 --rc genhtml_legend=1 00:42:59.279 --rc geninfo_all_blocks=1 00:42:59.279 --rc geninfo_unexecuted_blocks=1 00:42:59.279 00:42:59.279 ' 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:59.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:59.279 --rc genhtml_branch_coverage=1 00:42:59.279 --rc genhtml_function_coverage=1 00:42:59.279 --rc genhtml_legend=1 00:42:59.279 --rc geninfo_all_blocks=1 00:42:59.279 --rc geninfo_unexecuted_blocks=1 00:42:59.279 00:42:59.279 ' 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:42:59.279 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:59.539 15:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:59.539 15:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:42:59.539 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:43:06.111 15:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:43:06.111 Found 0000:86:00.0 (0x8086 - 0x159b) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:43:06.111 Found 0000:86:00.1 (0x8086 - 0x159b) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:43:06.111 Found net devices under 0000:86:00.0: cvl_0_0 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:06.111 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:06.112 15:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:43:06.112 Found net devices under 0000:86:00.1: cvl_0_1 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:06.112 15:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:06.112 15:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:06.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:06.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:43:06.112 00:43:06.112 --- 10.0.0.2 ping statistics --- 00:43:06.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:06.112 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:06.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:06.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:43:06.112 00:43:06.112 --- 10.0.0.1 ping statistics --- 00:43:06.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:06.112 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:43:06.112 only one NIC for nvmf test 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:43:06.112 15:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:06.112 rmmod nvme_tcp 00:43:06.112 rmmod nvme_fabrics 00:43:06.112 rmmod nvme_keyring 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:43:06.112 15:47:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:06.112 15:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:07.494 15:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:07.494 15:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:43:07.494 15:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:43:07.494 15:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:07.494 15:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:43:07.494 15:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:07.494 
15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:07.494 00:43:07.494 real 0m8.325s 00:43:07.494 user 0m1.764s 00:43:07.494 sys 0m4.542s 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:43:07.494 ************************************ 00:43:07.494 END TEST nvmf_target_multipath 00:43:07.494 ************************************ 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:07.494 ************************************ 00:43:07.494 START TEST nvmf_zcopy 00:43:07.494 ************************************ 00:43:07.494 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:43:07.790 * Looking for test storage... 
00:43:07.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:07.790 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:07.790 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:43:07.791 15:47:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:07.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:07.791 --rc genhtml_branch_coverage=1 00:43:07.791 --rc genhtml_function_coverage=1 00:43:07.791 --rc genhtml_legend=1 00:43:07.791 --rc geninfo_all_blocks=1 00:43:07.791 --rc geninfo_unexecuted_blocks=1 00:43:07.791 00:43:07.791 ' 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:07.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:07.791 --rc genhtml_branch_coverage=1 00:43:07.791 --rc genhtml_function_coverage=1 00:43:07.791 --rc genhtml_legend=1 00:43:07.791 --rc geninfo_all_blocks=1 00:43:07.791 --rc geninfo_unexecuted_blocks=1 00:43:07.791 00:43:07.791 ' 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:07.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:07.791 --rc genhtml_branch_coverage=1 00:43:07.791 --rc genhtml_function_coverage=1 00:43:07.791 --rc genhtml_legend=1 00:43:07.791 --rc geninfo_all_blocks=1 00:43:07.791 --rc geninfo_unexecuted_blocks=1 00:43:07.791 00:43:07.791 ' 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:07.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:07.791 --rc genhtml_branch_coverage=1 00:43:07.791 --rc genhtml_function_coverage=1 00:43:07.791 --rc genhtml_legend=1 00:43:07.791 --rc geninfo_all_blocks=1 00:43:07.791 --rc geninfo_unexecuted_blocks=1 00:43:07.791 00:43:07.791 ' 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:07.791 15:47:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:07.791 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:07.792 15:47:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:43:07.792 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:14.458 
15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:14.458 15:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:43:14.458 Found 0000:86:00.0 (0x8086 - 0x159b) 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:14.458 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:43:14.459 Found 0000:86:00.1 (0x8086 - 0x159b) 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:43:14.459 Found net devices under 0000:86:00.0: cvl_0_0 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:43:14.459 Found net devices under 0000:86:00.1: cvl_0_1 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:14.459 15:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:14.459 15:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:14.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:14.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:43:14.459 00:43:14.459 --- 10.0.0.2 ping statistics --- 00:43:14.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:14.459 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:14.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:14.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:43:14.459 00:43:14.459 --- 10.0.0.1 ping statistics --- 00:43:14.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:14.459 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=4168455 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 4168455 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 4168455 ']' 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:14.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:14.459 15:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:14.459 [2024-11-06 15:47:41.299100] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:14.459 [2024-11-06 15:47:41.301135] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:43:14.459 [2024-11-06 15:47:41.301200] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:14.459 [2024-11-06 15:47:41.429948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:14.459 [2024-11-06 15:47:41.530799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:14.459 [2024-11-06 15:47:41.530838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:14.459 [2024-11-06 15:47:41.530849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:14.459 [2024-11-06 15:47:41.530857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:14.459 [2024-11-06 15:47:41.530866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:14.459 [2024-11-06 15:47:41.532277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:14.460 [2024-11-06 15:47:41.838090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:14.460 [2024-11-06 15:47:41.838372] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:14.719 [2024-11-06 15:47:42.141337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:14.719 
15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:14.719 [2024-11-06 15:47:42.169662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:14.719 malloc0 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:14.719 { 00:43:14.719 "params": { 00:43:14.719 "name": "Nvme$subsystem", 00:43:14.719 "trtype": "$TEST_TRANSPORT", 00:43:14.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:14.719 "adrfam": "ipv4", 00:43:14.719 "trsvcid": "$NVMF_PORT", 00:43:14.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:14.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:14.719 "hdgst": ${hdgst:-false}, 00:43:14.719 "ddgst": ${ddgst:-false} 00:43:14.719 }, 00:43:14.719 "method": "bdev_nvme_attach_controller" 00:43:14.719 } 00:43:14.719 EOF 00:43:14.719 )") 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:43:14.719 15:47:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:43:14.719 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:43:14.720 15:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:14.720 "params": { 00:43:14.720 "name": "Nvme1", 00:43:14.720 "trtype": "tcp", 00:43:14.720 "traddr": "10.0.0.2", 00:43:14.720 "adrfam": "ipv4", 00:43:14.720 "trsvcid": "4420", 00:43:14.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:14.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:14.720 "hdgst": false, 00:43:14.720 "ddgst": false 00:43:14.720 }, 00:43:14.720 "method": "bdev_nvme_attach_controller" 00:43:14.720 }' 00:43:14.720 [2024-11-06 15:47:42.325801] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:43:14.720 [2024-11-06 15:47:42.325878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168641 ] 00:43:14.979 [2024-11-06 15:47:42.450835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:14.979 [2024-11-06 15:47:42.553659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:15.546 Running I/O for 10 seconds... 
00:43:17.420 7273.00 IOPS, 56.82 MiB/s [2024-11-06T14:47:45.995Z] 7341.00 IOPS, 57.35 MiB/s [2024-11-06T14:47:47.372Z] 7357.00 IOPS, 57.48 MiB/s [2024-11-06T14:47:48.309Z] 7373.00 IOPS, 57.60 MiB/s [2024-11-06T14:47:49.246Z] 7374.60 IOPS, 57.61 MiB/s [2024-11-06T14:47:50.181Z] 7380.00 IOPS, 57.66 MiB/s [2024-11-06T14:47:51.118Z] 7388.57 IOPS, 57.72 MiB/s [2024-11-06T14:47:52.055Z] 7374.12 IOPS, 57.61 MiB/s [2024-11-06T14:47:53.431Z] 7377.33 IOPS, 57.64 MiB/s [2024-11-06T14:47:53.431Z] 7379.50 IOPS, 57.65 MiB/s 00:43:25.793 Latency(us) 00:43:25.793 [2024-11-06T14:47:53.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:25.793 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:43:25.793 Verification LBA range: start 0x0 length 0x1000 00:43:25.793 Nvme1n1 : 10.05 7352.85 57.44 0.00 0.00 17296.20 3354.82 45188.63 00:43:25.793 [2024-11-06T14:47:53.431Z] =================================================================================================================== 00:43:25.793 [2024-11-06T14:47:53.431Z] Total : 7352.85 57.44 0.00 0.00 17296.20 3354.82 45188.63 00:43:26.360 15:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4170400 00:43:26.360 15:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:43:26.360 15:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:26.360 15:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:43:26.360 15:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:43:26.360 15:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:43:26.360 15:47:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:43:26.360 15:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:26.360 15:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:26.360 { 00:43:26.360 "params": { 00:43:26.360 "name": "Nvme$subsystem", 00:43:26.360 "trtype": "$TEST_TRANSPORT", 00:43:26.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:26.360 "adrfam": "ipv4", 00:43:26.360 "trsvcid": "$NVMF_PORT", 00:43:26.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:26.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:26.360 "hdgst": ${hdgst:-false}, 00:43:26.360 "ddgst": ${ddgst:-false} 00:43:26.360 }, 00:43:26.360 "method": "bdev_nvme_attach_controller" 00:43:26.360 } 00:43:26.360 EOF 00:43:26.360 )") 00:43:26.360 15:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:43:26.360 [2024-11-06 15:47:53.928865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.360 [2024-11-06 15:47:53.928905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.360 15:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:43:26.360 15:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:43:26.360 15:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:26.360 "params": { 00:43:26.360 "name": "Nvme1", 00:43:26.360 "trtype": "tcp", 00:43:26.360 "traddr": "10.0.0.2", 00:43:26.360 "adrfam": "ipv4", 00:43:26.360 "trsvcid": "4420", 00:43:26.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:26.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:26.360 "hdgst": false, 00:43:26.360 "ddgst": false 00:43:26.360 }, 00:43:26.360 "method": "bdev_nvme_attach_controller" 00:43:26.360 }' 00:43:26.360 [2024-11-06 15:47:53.940846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.360 [2024-11-06 15:47:53.940872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.360 [2024-11-06 15:47:53.952816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.360 [2024-11-06 15:47:53.952839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.360 [2024-11-06 15:47:53.964828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.360 [2024-11-06 15:47:53.964858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.360 [2024-11-06 15:47:53.976818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.360 [2024-11-06 15:47:53.976840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.360 [2024-11-06 15:47:53.988807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.360 [2024-11-06 15:47:53.988829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.360 [2024-11-06 15:47:53.993021] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:43:26.360 [2024-11-06 15:47:53.993094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170400 ] 00:43:26.619 [2024-11-06 15:47:54.000817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.619 [2024-11-06 15:47:54.000839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.619 [2024-11-06 15:47:54.012822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.619 [2024-11-06 15:47:54.012846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.619 [2024-11-06 15:47:54.024801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.024822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.036822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.036843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.048803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.048823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.060813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.060832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.072813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.072832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:43:26.620 [2024-11-06 15:47:54.084801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.084819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.096812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.096832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.108817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.108836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.116532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:26.620 [2024-11-06 15:47:54.120809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.120829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.132821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.132842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.144804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.144824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.156809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.156831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.168809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.168827] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.180799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.180818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.192807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.192825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.204809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.204829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.216803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.216822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.228805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.228823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.228870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:26.620 [2024-11-06 15:47:54.240801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.240820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.620 [2024-11-06 15:47:54.252837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.620 [2024-11-06 15:47:54.252858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.264817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:43:26.879 [2024-11-06 15:47:54.264837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.276806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.276825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.288810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.288829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.300808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.300826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.312799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.312817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.324819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.324840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.336809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.336829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.348818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.348837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.360809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 
15:47:54.360828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.372798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.372816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.384827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.384849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.396822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.396840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.408807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.408825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.420815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.420834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.432797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.432817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.444809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.444827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.456824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.456845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.468800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.468819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.480809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.480828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.492816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.492835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.879 [2024-11-06 15:47:54.504797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.879 [2024-11-06 15:47:54.504815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:27.139 [2024-11-06 15:47:54.516830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:27.139 [2024-11-06 15:47:54.516853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:27.139 [2024-11-06 15:47:54.528800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:27.139 [2024-11-06 15:47:54.528819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:27.139 [2024-11-06 15:47:54.540819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:27.139 [2024-11-06 15:47:54.540839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:27.139 [2024-11-06 15:47:54.552807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:27.139 [2024-11-06 15:47:54.552825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:27.139 
[2024-11-06 15:47:54.564803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:27.139 [2024-11-06 15:47:54.564825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:27.139 [2024-11-06 15:47:54.576812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:27.139 [2024-11-06 15:47:54.576832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:27.139 [2024-11-06 15:47:54.588808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:27.139 [2024-11-06 15:47:54.588829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:27.139 [2024-11-06 15:47:54.600805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:27.139 [2024-11-06 15:47:54.600826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:27.139 [2024-11-06 15:47:54.612816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:27.139 [2024-11-06 15:47:54.612839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:27.139 [2024-11-06 15:47:54.624804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:27.139 [2024-11-06 15:47:54.624826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:27.139 Running I/O for 5 seconds... 
00:43:27.139 [2024-11-06 15:47:54.643955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:27.139 [2024-11-06 15:47:54.643980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[previous two-line error pair repeated, timestamps 15:47:54.659 through 15:47:57.263, while the workload continued]
00:43:28.233 14229.00 IOPS, 111.16 MiB/s [2024-11-06T14:47:55.871Z]
00:43:29.270 14210.50 IOPS, 111.02 MiB/s [2024-11-06T14:47:56.908Z]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:29.789 [2024-11-06 15:47:57.280149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:29.789 [2024-11-06 15:47:57.280174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:29.789 [2024-11-06 15:47:57.293706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:29.789 [2024-11-06 15:47:57.293730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:29.789 [2024-11-06 15:47:57.310905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:29.789 [2024-11-06 15:47:57.310930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:29.789 [2024-11-06 15:47:57.326353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:29.790 [2024-11-06 15:47:57.326376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:29.790 [2024-11-06 15:47:57.343673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:29.790 [2024-11-06 15:47:57.343697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:29.790 [2024-11-06 15:47:57.357196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:29.790 [2024-11-06 15:47:57.357225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:29.790 [2024-11-06 15:47:57.374805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:29.790 [2024-11-06 15:47:57.374829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:29.790 [2024-11-06 15:47:57.392125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:29.790 [2024-11-06 15:47:57.392149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:43:29.790 [2024-11-06 15:47:57.404561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:29.790 [2024-11-06 15:47:57.404584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:29.790 [2024-11-06 15:47:57.419244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:29.790 [2024-11-06 15:47:57.419268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.435296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.435321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.452260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.452283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.465415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.465438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.483085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.483109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.498276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.498305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.515578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.515602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.528690] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.528715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.543572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.543596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.559627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.559650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.575642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.575666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.591524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.591548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.607832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.607856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.622156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.622180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.639562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.639586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 14220.00 IOPS, 111.09 MiB/s [2024-11-06T14:47:57.687Z] [2024-11-06 15:47:57.654041] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.654066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.049 [2024-11-06 15:47:57.671534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.049 [2024-11-06 15:47:57.671558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.686325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.686349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.703765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.703789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.716976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.717000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.729707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.729731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.747317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.747342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.761052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.761075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.773707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.773731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.790999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.791028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.806825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.806849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.823561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.823584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.837098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.837121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.849334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.849357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.866985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.867010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.882536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 [2024-11-06 15:47:57.882559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.899575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.308 
[2024-11-06 15:47:57.899599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.308 [2024-11-06 15:47:57.914874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.309 [2024-11-06 15:47:57.914898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.309 [2024-11-06 15:47:57.931559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.309 [2024-11-06 15:47:57.931584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:57.947676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:57.947701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:57.962149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:57.962174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:57.979864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:57.979890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:57.993311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:57.993335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.010860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.010884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.024965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.024988] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.037102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.037124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.049456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.049479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.061225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.061248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.074043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.074066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.091570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.091593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.104628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.104651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.117748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.117772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.134962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.134987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:43:30.568 [2024-11-06 15:47:58.150895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.150918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.166765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.166790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.183822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.183849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.568 [2024-11-06 15:47:58.196272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.568 [2024-11-06 15:47:58.196297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.210900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.210925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.228050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.228074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.239853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.239877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.255441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.255466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.271284] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.271308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.287900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.287925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.300755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.300779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.315325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.315348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.332372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.332396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.345787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.345812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.363128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.363152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.378996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.379020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.395692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.395717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.410198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.410229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.427355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.427379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.442742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.442766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:30.828 [2024-11-06 15:47:58.459752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:30.828 [2024-11-06 15:47:58.459776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.087 [2024-11-06 15:47:58.472905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.087 [2024-11-06 15:47:58.472929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.087 [2024-11-06 15:47:58.487476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.087 [2024-11-06 15:47:58.487501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.087 [2024-11-06 15:47:58.501726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.087 [2024-11-06 15:47:58.501750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.087 [2024-11-06 15:47:58.518850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.087 
[2024-11-06 15:47:58.518875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.087 [2024-11-06 15:47:58.535279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.087 [2024-11-06 15:47:58.535303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.087 [2024-11-06 15:47:58.550180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.087 [2024-11-06 15:47:58.550210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.087 [2024-11-06 15:47:58.567331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.087 [2024-11-06 15:47:58.567356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.087 [2024-11-06 15:47:58.583355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.087 [2024-11-06 15:47:58.583379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.087 [2024-11-06 15:47:58.597251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.087 [2024-11-06 15:47:58.597274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.088 [2024-11-06 15:47:58.610084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.088 [2024-11-06 15:47:58.610108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.088 [2024-11-06 15:47:58.626951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.088 [2024-11-06 15:47:58.626974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.088 [2024-11-06 15:47:58.643673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.088 [2024-11-06 15:47:58.643697] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.088 14234.00 IOPS, 111.20 MiB/s [2024-11-06T14:47:58.726Z] [2024-11-06 15:47:58.659503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.088 [2024-11-06 15:47:58.659526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.088 [2024-11-06 15:47:58.674887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.088 [2024-11-06 15:47:58.674911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.088 [2024-11-06 15:47:58.691772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.088 [2024-11-06 15:47:58.691796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.088 [2024-11-06 15:47:58.705115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.088 [2024-11-06 15:47:58.705138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.088 [2024-11-06 15:47:58.722614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.088 [2024-11-06 15:47:58.722638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.738817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.738842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.755420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.755444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.770401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.770424] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.787719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.787743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.802633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.802656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.819418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.819441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.833522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.833545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.851108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.851131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.865628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.865651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.882575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.882599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.899974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.899997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:43:31.347 [2024-11-06 15:47:58.911665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.911689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.927687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.927711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.943443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.943471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.959810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.959834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.347 [2024-11-06 15:47:58.973270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.347 [2024-11-06 15:47:58.973293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:58.986063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:58.986087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.003360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.003385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.016527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.016552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.031199] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.031229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.047404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.047429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.060770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.060794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.075009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.075032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.091735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.091760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.106423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.106447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.124188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.124218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.137779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.137803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.155209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.155232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.170351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.170374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.188415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.188438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.199247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.199270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.215289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.215313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.606 [2024-11-06 15:47:59.231756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.606 [2024-11-06 15:47:59.231784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.865 [2024-11-06 15:47:59.247954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.865 [2024-11-06 15:47:59.247978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.865 [2024-11-06 15:47:59.261351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.865 [2024-11-06 15:47:59.261374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.865 [2024-11-06 15:47:59.274085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.865 
[2024-11-06 15:47:59.274110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.865 [2024-11-06 15:47:59.291941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.865 [2024-11-06 15:47:59.291965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.865 [2024-11-06 15:47:59.303909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.865 [2024-11-06 15:47:59.303933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.865 [2024-11-06 15:47:59.319785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.865 [2024-11-06 15:47:59.319808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.865 [2024-11-06 15:47:59.334317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.865 [2024-11-06 15:47:59.334340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.865 [2024-11-06 15:47:59.351666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.865 [2024-11-06 15:47:59.351689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.865 [2024-11-06 15:47:59.364883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.866 [2024-11-06 15:47:59.364906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.866 [2024-11-06 15:47:59.377694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.866 [2024-11-06 15:47:59.377717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.866 [2024-11-06 15:47:59.394642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.866 [2024-11-06 15:47:59.394665] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.866 [2024-11-06 15:47:59.411351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.866 [2024-11-06 15:47:59.411374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.866 [2024-11-06 15:47:59.426545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.866 [2024-11-06 15:47:59.426568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.866 [2024-11-06 15:47:59.444052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.866 [2024-11-06 15:47:59.444076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.866 [2024-11-06 15:47:59.456101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.866 [2024-11-06 15:47:59.456125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.866 [2024-11-06 15:47:59.471447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.866 [2024-11-06 15:47:59.471470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:31.866 [2024-11-06 15:47:59.487235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:31.866 [2024-11-06 15:47:59.487258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.503743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.503766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.517046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.517074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:43:32.125 [2024-11-06 15:47:59.528933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.528955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.541630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.541653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.558879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.558902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.575974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.575997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.588842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.588864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.601678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.601701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.618936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.618960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.634769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.634796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 14247.40 IOPS, 111.31 MiB/s 
[2024-11-06T14:47:59.763Z] [2024-11-06 15:47:59.650873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.650898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 00:43:32.125 Latency(us) 00:43:32.125 [2024-11-06T14:47:59.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:32.125 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:43:32.125 Nvme1n1 : 5.01 14248.54 111.32 0.00 0.00 8973.81 2309.36 15042.07 00:43:32.125 [2024-11-06T14:47:59.763Z] =================================================================================================================== 00:43:32.125 [2024-11-06T14:47:59.763Z] Total : 14248.54 111.32 0.00 0.00 8973.81 2309.36 15042.07 00:43:32.125 [2024-11-06 15:47:59.660815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.660838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.672807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.672829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.684796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.684815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.696809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.696828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.708847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.708874] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.720814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.720834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.732819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.732839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.744804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.744823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.125 [2024-11-06 15:47:59.756817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.125 [2024-11-06 15:47:59.756837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.768816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.768835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.780795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.780814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.792811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.792830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.804820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.804841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:43:32.384 [2024-11-06 15:47:59.816800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.816819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.828814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.828834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.840798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.840815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.852816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.852835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.864808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.864826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.876798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.876817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.888811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.888830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.900808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.900826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.912807] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.912824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.924809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.924828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.936793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.936811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.948807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.948826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.960808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.960826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.972810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.972831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.984817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.984836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:47:59.996808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:47:59.996827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.384 [2024-11-06 15:48:00.008811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:43:32.384 [2024-11-06 15:48:00.008833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.020818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.020841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.032801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.032822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.044846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.044874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.056879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.056909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.068810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.068833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.080811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.080830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.092817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.092838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.104832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 
[2024-11-06 15:48:00.104856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.116813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.116833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.128798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.128818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.140810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.140830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.152812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.152832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.164796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.164814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.176810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.176829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.188806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.188824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.200820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.200839] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.212811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.212829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.224927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.224947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.236814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.236833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.248803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.248821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.260802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.260822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.643 [2024-11-06 15:48:00.272807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.643 [2024-11-06 15:48:00.272825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.902 [2024-11-06 15:48:00.284809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.902 [2024-11-06 15:48:00.284827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.902 [2024-11-06 15:48:00.296795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.902 [2024-11-06 15:48:00.296813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:43:32.903 [2024-11-06 15:48:00.308807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.308825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.320806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.320827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.332814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.332834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.344818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.344836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.356799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.356818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.368813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.368833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.380806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.380825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.392795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.392813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.404809] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.404831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.416807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.416826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.428808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.428826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.440804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.440824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.452795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.452813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.464828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.464847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.476807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.476825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.488816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.488835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.500815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.500835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.512799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.512818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 [2024-11-06 15:48:00.524815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:32.903 [2024-11-06 15:48:00.524834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:32.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4170400) - No such process 00:43:32.903 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4170400 00:43:32.903 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:43:32.903 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:32.903 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:33.162 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:33.162 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:43:33.162 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:33.162 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:33.162 delay0 00:43:33.162 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:43:33.162 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:43:33.162 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:33.162 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:33.162 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:33.162 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:43:33.162 [2024-11-06 15:48:00.725110] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:43:41.280 Initializing NVMe Controllers 00:43:41.280 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:41.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:43:41.280 Initialization complete. Launching workers. 
00:43:41.280 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 239, failed: 24256 00:43:41.280 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 24365, failed to submit 130 00:43:41.280 success 24280, unsuccessful 85, failed 0 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:41.280 rmmod nvme_tcp 00:43:41.280 rmmod nvme_fabrics 00:43:41.280 rmmod nvme_keyring 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 4168455 ']' 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 4168455 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@952 -- # '[' -z 4168455 ']' 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 4168455 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4168455 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4168455' 00:43:41.280 killing process with pid 4168455 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 4168455 00:43:41.280 15:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 4168455 00:43:41.848 15:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:41.848 15:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:41.848 15:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:41.848 15:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:43:41.848 15:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:43:41.848 15:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:43:41.848 15:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:43:41.848 15:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:43:41.848 15:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:43:41.848 15:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:43:41.848 15:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:43:41.848 15:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:43:43.752 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:43:43.752
00:43:43.752 real 0m36.152s
00:43:43.752 user 0m47.480s
00:43:43.752 sys 0m13.185s
00:43:43.752 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:43:43.752 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:43.752 ************************************
00:43:43.752 END TEST nvmf_zcopy
00:43:43.752 ************************************
00:43:43.752 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:43:43.752 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:43:43.752 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:43:43.752 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:43:43.752 ************************************
00:43:43.752 START TEST nvmf_nmic
00:43:43.752 ************************************
00:43:43.752 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:43:44.012 * Looking for test storage...
00:43:44.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version
00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:43:44.012 15:48:11
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:43:44.012 15:48:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:44.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.012 --rc genhtml_branch_coverage=1 00:43:44.012 --rc genhtml_function_coverage=1 00:43:44.012 --rc genhtml_legend=1 00:43:44.012 --rc geninfo_all_blocks=1 00:43:44.012 --rc geninfo_unexecuted_blocks=1 00:43:44.012 00:43:44.012 ' 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:44.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.012 --rc genhtml_branch_coverage=1 00:43:44.012 --rc genhtml_function_coverage=1 00:43:44.012 --rc genhtml_legend=1 00:43:44.012 --rc geninfo_all_blocks=1 00:43:44.012 --rc geninfo_unexecuted_blocks=1 00:43:44.012 00:43:44.012 ' 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:44.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.012 --rc genhtml_branch_coverage=1 00:43:44.012 --rc genhtml_function_coverage=1 00:43:44.012 --rc genhtml_legend=1 00:43:44.012 --rc geninfo_all_blocks=1 00:43:44.012 --rc geninfo_unexecuted_blocks=1 00:43:44.012 00:43:44.012 ' 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:44.012 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.012 --rc genhtml_branch_coverage=1 00:43:44.012 --rc genhtml_function_coverage=1 00:43:44.012 --rc genhtml_legend=1 00:43:44.012 --rc geninfo_all_blocks=1 00:43:44.012 --rc geninfo_unexecuted_blocks=1 00:43:44.012 00:43:44.012 ' 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:44.012 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:43:44.013 15:48:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.013 15:48:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:43:44.013 15:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:50.582 15:48:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:50.582 15:48:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:43:50.582 Found 0000:86:00.0 (0x8086 - 0x159b) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:43:50.582 Found 0000:86:00.1 (0x8086 - 0x159b) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:50.582 15:48:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:43:50.582 Found net devices under 0000:86:00.0: cvl_0_0 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:50.582 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:50.583 15:48:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:43:50.583 Found net devices under 0000:86:00.1: cvl_0_1 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:50.583 15:48:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:43:50.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:43:50.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms
00:43:50.583
00:43:50.583 --- 10.0.0.2 ping statistics ---
00:43:50.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:43:50.583 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:43:50.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:43:50.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms
00:43:50.583
00:43:50.583 --- 10.0.0.1 ping statistics ---
00:43:50.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:43:50.583 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=4176643
00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 4176643 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 4176643 ']' 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:50.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:50.583 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:50.583 [2024-11-06 15:48:17.527492] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:50.583 [2024-11-06 15:48:17.529610] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:43:50.583 [2024-11-06 15:48:17.529681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:50.583 [2024-11-06 15:48:17.660386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:50.583 [2024-11-06 15:48:17.772571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:50.583 [2024-11-06 15:48:17.772608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:50.583 [2024-11-06 15:48:17.772621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:50.583 [2024-11-06 15:48:17.772631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:50.583 [2024-11-06 15:48:17.772640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:50.583 [2024-11-06 15:48:17.775121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:50.583 [2024-11-06 15:48:17.777245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:50.583 [2024-11-06 15:48:17.777346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:50.583 [2024-11-06 15:48:17.777369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:50.583 [2024-11-06 15:48:18.079581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:50.583 [2024-11-06 15:48:18.086839] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:50.583 [2024-11-06 15:48:18.087132] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:43:50.583 [2024-11-06 15:48:18.087552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:50.583 [2024-11-06 15:48:18.087779] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:50.842 [2024-11-06 15:48:18.382434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:43:50.842 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:51.101 Malloc0 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:51.101 [2024-11-06 15:48:18.518394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:43:51.101 test case1: single bdev can't be used in multiple subsystems 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.101 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:51.102 [2024-11-06 15:48:18.545964] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:43:51.102 [2024-11-06 15:48:18.545998] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:43:51.102 [2024-11-06 15:48:18.546010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:51.102 request: 00:43:51.102 { 00:43:51.102 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:43:51.102 "namespace": { 00:43:51.102 "bdev_name": "Malloc0", 00:43:51.102 "no_auto_visible": false 00:43:51.102 }, 00:43:51.102 "method": "nvmf_subsystem_add_ns", 00:43:51.102 "req_id": 1 00:43:51.102 } 00:43:51.102 Got JSON-RPC error response 00:43:51.102 response: 00:43:51.102 { 00:43:51.102 "code": -32602, 00:43:51.102 "message": "Invalid parameters" 00:43:51.102 } 00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:43:51.102 Adding namespace failed - expected result. 
00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:43:51.102 test case2: host connect to nvmf target in multiple paths 00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:51.102 [2024-11-06 15:48:18.558066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:51.102 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:51.360 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:43:51.619 15:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:43:51.619 15:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:43:51.619 15:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:43:51.619 15:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:43:51.619 15:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:43:53.533 15:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:43:53.533 15:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:43:53.533 15:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:43:53.533 15:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:43:53.533 15:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:43:53.533 15:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:43:53.533 15:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:53.533 [global] 00:43:53.533 thread=1 00:43:53.533 invalidate=1 00:43:53.533 rw=write 00:43:53.533 time_based=1 00:43:53.533 runtime=1 00:43:53.533 ioengine=libaio 00:43:53.533 direct=1 00:43:53.533 bs=4096 00:43:53.533 iodepth=1 00:43:53.533 norandommap=0 00:43:53.533 numjobs=1 00:43:53.533 00:43:53.533 verify_dump=1 00:43:53.533 verify_backlog=512 00:43:53.533 verify_state_save=0 00:43:53.533 do_verify=1 00:43:53.533 verify=crc32c-intel 00:43:53.790 [job0] 00:43:53.790 filename=/dev/nvme0n1 00:43:53.790 Could not set queue depth (nvme0n1) 00:43:54.049 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:54.049 fio-3.35 00:43:54.049 Starting 1 thread 00:43:54.986 00:43:54.986 job0: (groupid=0, jobs=1): err= 0: pid=4177463: Wed Nov 6 
15:48:22 2024 00:43:54.986 read: IOPS=2454, BW=9818KiB/s (10.1MB/s)(9828KiB/1001msec) 00:43:54.986 slat (nsec): min=6429, max=26924, avg=7353.29, stdev=851.01 00:43:54.986 clat (usec): min=204, max=374, avg=224.99, stdev=13.52 00:43:54.986 lat (usec): min=212, max=382, avg=232.34, stdev=13.54 00:43:54.986 clat percentiles (usec): 00:43:54.986 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 212], 20.00th=[ 215], 00:43:54.986 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 223], 00:43:54.986 | 70.00th=[ 227], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 251], 00:43:54.986 | 99.00th=[ 258], 99.50th=[ 260], 99.90th=[ 297], 99.95th=[ 318], 00:43:54.986 | 99.99th=[ 375] 00:43:54.986 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:43:54.986 slat (nsec): min=9272, max=40560, avg=10260.11, stdev=1307.94 00:43:54.986 clat (usec): min=131, max=384, avg=153.22, stdev= 8.12 00:43:54.986 lat (usec): min=141, max=424, avg=163.48, stdev= 8.55 00:43:54.986 clat percentiles (usec): 00:43:54.986 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149], 00:43:54.986 | 30.00th=[ 151], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 153], 00:43:54.986 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 161], 95.00th=[ 163], 00:43:54.986 | 99.00th=[ 172], 99.50th=[ 174], 99.90th=[ 253], 99.95th=[ 269], 00:43:54.986 | 99.99th=[ 383] 00:43:54.986 bw ( KiB/s): min=12184, max=12184, per=100.00%, avg=12184.00, stdev= 0.00, samples=1 00:43:54.986 iops : min= 3046, max= 3046, avg=3046.00, stdev= 0.00, samples=1 00:43:54.986 lat (usec) : 250=96.85%, 500=3.15% 00:43:54.986 cpu : usr=2.50%, sys=4.50%, ctx=5017, majf=0, minf=1 00:43:54.986 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:54.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:54.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:54.986 issued rwts: total=2457,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:54.986 
latency : target=0, window=0, percentile=100.00%, depth=1 00:43:54.986 00:43:54.986 Run status group 0 (all jobs): 00:43:54.986 READ: bw=9818KiB/s (10.1MB/s), 9818KiB/s-9818KiB/s (10.1MB/s-10.1MB/s), io=9828KiB (10.1MB), run=1001-1001msec 00:43:54.986 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:43:54.986 00:43:54.986 Disk stats (read/write): 00:43:54.986 nvme0n1: ios=2098/2508, merge=0/0, ticks=478/373, in_queue=851, util=91.28% 00:43:54.986 15:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:55.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 
-- # nvmfcleanup 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:55.554 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:55.554 rmmod nvme_tcp 00:43:55.554 rmmod nvme_fabrics 00:43:55.814 rmmod nvme_keyring 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 4176643 ']' 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 4176643 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 4176643 ']' 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 4176643 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4176643 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4176643' 00:43:55.814 killing process with pid 4176643 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 4176643 00:43:55.814 15:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 4176643 00:43:57.192 15:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:57.192 15:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:57.192 15:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:57.192 15:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:43:57.192 15:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:43:57.192 15:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:57.192 15:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:43:57.192 15:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:57.192 15:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:57.192 15:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:57.192 15:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:43:57.192 15:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:59.098 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:59.098 00:43:59.098 real 0m15.290s 00:43:59.098 user 0m27.403s 00:43:59.098 sys 0m6.436s 00:43:59.098 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:59.098 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:59.098 ************************************ 00:43:59.098 END TEST nvmf_nmic 00:43:59.098 ************************************ 00:43:59.098 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:43:59.098 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:43:59.098 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:59.098 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:59.098 ************************************ 00:43:59.098 START TEST nvmf_fio_target 00:43:59.098 ************************************ 00:43:59.098 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:43:59.358 * Looking for test storage... 
00:43:59.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:59.358 
15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:59.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.358 --rc genhtml_branch_coverage=1 00:43:59.358 --rc genhtml_function_coverage=1 00:43:59.358 --rc genhtml_legend=1 00:43:59.358 --rc geninfo_all_blocks=1 00:43:59.358 --rc geninfo_unexecuted_blocks=1 00:43:59.358 00:43:59.358 ' 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:59.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.358 --rc genhtml_branch_coverage=1 00:43:59.358 --rc genhtml_function_coverage=1 00:43:59.358 --rc genhtml_legend=1 00:43:59.358 --rc geninfo_all_blocks=1 00:43:59.358 --rc geninfo_unexecuted_blocks=1 00:43:59.358 00:43:59.358 ' 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:59.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.358 --rc genhtml_branch_coverage=1 00:43:59.358 --rc genhtml_function_coverage=1 00:43:59.358 --rc genhtml_legend=1 00:43:59.358 --rc geninfo_all_blocks=1 00:43:59.358 --rc geninfo_unexecuted_blocks=1 00:43:59.358 00:43:59.358 ' 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:59.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.358 --rc genhtml_branch_coverage=1 00:43:59.358 --rc genhtml_function_coverage=1 00:43:59.358 --rc genhtml_legend=1 00:43:59.358 --rc geninfo_all_blocks=1 
00:43:59.358 --rc geninfo_unexecuted_blocks=1 00:43:59.358 00:43:59.358 ' 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:43:59.358 
15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.358 15:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.358 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:59.359 
15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:59.359 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:43:59.359 15:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:44:05.960 15:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:05.960 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:44:05.960 Found 0000:86:00.0 (0x8086 - 0x159b) 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:44:05.961 Found 0000:86:00.1 (0x8086 - 0x159b) 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:05.961 
15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:44:05.961 Found net 
devices under 0000:86:00.0: cvl_0_0 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:44:05.961 Found net devices under 0000:86:00.1: cvl_0_1 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:05.961 15:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:05.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:05.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:44:05.961 00:44:05.961 --- 10.0.0.2 ping statistics --- 00:44:05.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:05.961 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:05.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:05.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:44:05.961 00:44:05.961 --- 10.0.0.1 ping statistics --- 00:44:05.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:05.961 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:05.961 15:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=4181453 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 4181453 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 4181453 ']' 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:05.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:05.961 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:05.962 15:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:05.962 [2024-11-06 15:48:32.836895] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:05.962 [2024-11-06 15:48:32.838990] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:44:05.962 [2024-11-06 15:48:32.839056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:05.962 [2024-11-06 15:48:32.968428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:05.962 [2024-11-06 15:48:33.077163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:05.962 [2024-11-06 15:48:33.077212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:05.962 [2024-11-06 15:48:33.077225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:05.962 [2024-11-06 15:48:33.077252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:05.962 [2024-11-06 15:48:33.077264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:05.962 [2024-11-06 15:48:33.079672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:05.962 [2024-11-06 15:48:33.079772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:05.962 [2024-11-06 15:48:33.079850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:05.962 [2024-11-06 15:48:33.079874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:05.962 [2024-11-06 15:48:33.387619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:05.962 [2024-11-06 15:48:33.394955] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:05.962 [2024-11-06 15:48:33.395266] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:44:05.962 [2024-11-06 15:48:33.397359] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:05.962 [2024-11-06 15:48:33.397900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:44:06.275 15:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:06.275 15:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:44:06.275 15:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:06.275 15:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:06.275 15:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:06.275 15:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:06.275 15:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:44:06.275 [2024-11-06 15:48:33.844928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:06.275 15:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:06.554 15:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:44:06.554 15:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:44:06.813 15:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:44:06.813 15:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:07.379 15:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:44:07.379 15:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:07.379 15:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:44:07.379 15:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:44:07.637 15:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:07.895 15:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:44:07.895 15:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:08.154 15:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:44:08.154 15:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:08.413 15:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:44:08.413 15:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:44:08.672 15:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:08.931 15:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:44:08.931 15:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:08.931 15:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:44:08.931 15:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:44:09.190 15:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:09.449 [2024-11-06 15:48:36.924718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:09.449 15:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:44:09.708 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:44:09.967 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:10.225 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:44:10.225 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:44:10.225 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:44:10.225 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:44:10.225 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:44:10.225 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:44:12.129 15:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:44:12.129 15:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:44:12.129 15:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:44:12.129 15:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:44:12.129 15:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:44:12.129 15:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1210 -- # return 0 00:44:12.129 15:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:44:12.129 [global] 00:44:12.129 thread=1 00:44:12.129 invalidate=1 00:44:12.129 rw=write 00:44:12.129 time_based=1 00:44:12.129 runtime=1 00:44:12.129 ioengine=libaio 00:44:12.129 direct=1 00:44:12.129 bs=4096 00:44:12.129 iodepth=1 00:44:12.129 norandommap=0 00:44:12.129 numjobs=1 00:44:12.129 00:44:12.129 verify_dump=1 00:44:12.129 verify_backlog=512 00:44:12.129 verify_state_save=0 00:44:12.129 do_verify=1 00:44:12.129 verify=crc32c-intel 00:44:12.412 [job0] 00:44:12.412 filename=/dev/nvme0n1 00:44:12.412 [job1] 00:44:12.412 filename=/dev/nvme0n2 00:44:12.412 [job2] 00:44:12.412 filename=/dev/nvme0n3 00:44:12.412 [job3] 00:44:12.412 filename=/dev/nvme0n4 00:44:12.412 Could not set queue depth (nvme0n1) 00:44:12.412 Could not set queue depth (nvme0n2) 00:44:12.412 Could not set queue depth (nvme0n3) 00:44:12.412 Could not set queue depth (nvme0n4) 00:44:12.673 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:12.673 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:12.673 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:12.673 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:12.673 fio-3.35 00:44:12.673 Starting 4 threads 00:44:14.043 00:44:14.043 job0: (groupid=0, jobs=1): err= 0: pid=4182786: Wed Nov 6 15:48:41 2024 00:44:14.043 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:44:14.043 slat (nsec): min=2139, max=42471, avg=7448.61, stdev=3914.47 00:44:14.043 clat (usec): min=195, max=510, avg=247.48, stdev=20.37 00:44:14.043 lat (usec): min=198, max=517, 
avg=254.92, stdev=20.48 00:44:14.043 clat percentiles (usec): 00:44:14.043 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:44:14.043 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:44:14.043 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:44:14.043 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 449], 99.95th=[ 490], 00:44:14.043 | 99.99th=[ 510] 00:44:14.043 write: IOPS=2439, BW=9758KiB/s (9992kB/s)(9768KiB/1001msec); 0 zone resets 00:44:14.043 slat (nsec): min=3628, max=50223, avg=11702.22, stdev=3020.53 00:44:14.043 clat (usec): min=137, max=1172, avg=178.47, stdev=28.37 00:44:14.043 lat (usec): min=143, max=1176, avg=190.17, stdev=28.91 00:44:14.043 clat percentiles (usec): 00:44:14.043 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 167], 00:44:14.043 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:44:14.043 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:44:14.043 | 99.00th=[ 235], 99.50th=[ 247], 99.90th=[ 515], 99.95th=[ 519], 00:44:14.043 | 99.99th=[ 1172] 00:44:14.043 bw ( KiB/s): min= 9872, max= 9872, per=33.88%, avg=9872.00, stdev= 0.00, samples=1 00:44:14.043 iops : min= 2468, max= 2468, avg=2468.00, stdev= 0.00, samples=1 00:44:14.043 lat (usec) : 250=82.43%, 500=17.48%, 750=0.07% 00:44:14.043 lat (msec) : 2=0.02% 00:44:14.043 cpu : usr=2.80%, sys=7.00%, ctx=4492, majf=0, minf=1 00:44:14.043 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:14.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.043 issued rwts: total=2048,2442,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.043 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:14.043 job1: (groupid=0, jobs=1): err= 0: pid=4182806: Wed Nov 6 15:48:41 2024 00:44:14.044 read: IOPS=1897, BW=7588KiB/s (7771kB/s)(7596KiB/1001msec) 00:44:14.044 
slat (nsec): min=4715, max=31354, avg=8055.67, stdev=1474.88 00:44:14.044 clat (usec): min=217, max=596, avg=286.87, stdev=53.99 00:44:14.044 lat (usec): min=225, max=603, avg=294.93, stdev=54.22 00:44:14.044 clat percentiles (usec): 00:44:14.044 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 258], 00:44:14.044 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:44:14.044 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 351], 95.00th=[ 424], 00:44:14.044 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 515], 99.95th=[ 594], 00:44:14.044 | 99.99th=[ 594] 00:44:14.044 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:44:14.044 slat (nsec): min=4970, max=54372, avg=9921.45, stdev=3088.99 00:44:14.044 clat (usec): min=147, max=746, avg=199.60, stdev=31.90 00:44:14.044 lat (usec): min=157, max=755, avg=209.52, stdev=32.77 00:44:14.044 clat percentiles (usec): 00:44:14.044 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 176], 00:44:14.044 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 202], 00:44:14.044 | 70.00th=[ 210], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 245], 00:44:14.044 | 99.00th=[ 262], 99.50th=[ 293], 99.90th=[ 420], 99.95th=[ 578], 00:44:14.044 | 99.99th=[ 750] 00:44:14.044 bw ( KiB/s): min= 8192, max= 8192, per=28.11%, avg=8192.00, stdev= 0.00, samples=1 00:44:14.044 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:44:14.044 lat (usec) : 250=55.46%, 500=43.63%, 750=0.91% 00:44:14.044 cpu : usr=3.10%, sys=5.60%, ctx=3947, majf=0, minf=2 00:44:14.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:14.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.044 issued rwts: total=1899,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:14.044 job2: (groupid=0, 
jobs=1): err= 0: pid=4182815: Wed Nov 6 15:48:41 2024 00:44:14.044 read: IOPS=1968, BW=7872KiB/s (8061kB/s)(7880KiB/1001msec) 00:44:14.044 slat (nsec): min=4856, max=49281, avg=8834.48, stdev=2475.86 00:44:14.044 clat (usec): min=221, max=3364, avg=276.91, stdev=105.56 00:44:14.044 lat (usec): min=229, max=3369, avg=285.74, stdev=105.58 00:44:14.044 clat percentiles (usec): 00:44:14.044 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:44:14.044 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 260], 00:44:14.044 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 302], 95.00th=[ 478], 00:44:14.044 | 99.00th=[ 498], 99.50th=[ 502], 99.90th=[ 2278], 99.95th=[ 3359], 00:44:14.044 | 99.99th=[ 3359] 00:44:14.044 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:44:14.044 slat (nsec): min=4518, max=61324, avg=11734.29, stdev=4516.65 00:44:14.044 clat (usec): min=151, max=425, avg=195.78, stdev=26.11 00:44:14.044 lat (usec): min=162, max=460, avg=207.51, stdev=27.49 00:44:14.044 clat percentiles (usec): 00:44:14.044 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:44:14.044 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 198], 00:44:14.044 | 70.00th=[ 206], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 241], 00:44:14.044 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 367], 99.95th=[ 400], 00:44:14.044 | 99.99th=[ 424] 00:44:14.044 bw ( KiB/s): min= 8192, max= 8192, per=28.11%, avg=8192.00, stdev= 0.00, samples=1 00:44:14.044 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:44:14.044 lat (usec) : 250=69.24%, 500=30.31%, 750=0.40% 00:44:14.044 lat (msec) : 4=0.05% 00:44:14.044 cpu : usr=3.30%, sys=6.30%, ctx=4018, majf=0, minf=2 00:44:14.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:14.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:44:14.044 issued rwts: total=1970,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:14.044 job3: (groupid=0, jobs=1): err= 0: pid=4182816: Wed Nov 6 15:48:41 2024 00:44:14.044 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:44:14.044 slat (nsec): min=7552, max=26926, avg=9230.17, stdev=2744.41 00:44:14.044 clat (usec): min=234, max=41483, avg=1586.87, stdev=6984.97 00:44:14.044 lat (usec): min=242, max=41499, avg=1596.10, stdev=6987.16 00:44:14.044 clat percentiles (usec): 00:44:14.044 | 1.00th=[ 243], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 262], 00:44:14.044 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 314], 00:44:14.044 | 70.00th=[ 355], 80.00th=[ 412], 90.00th=[ 502], 95.00th=[ 519], 00:44:14.044 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:44:14.044 | 99.99th=[41681] 00:44:14.044 write: IOPS=753, BW=3013KiB/s (3085kB/s)(3016KiB/1001msec); 0 zone resets 00:44:14.044 slat (nsec): min=11062, max=36018, avg=12555.95, stdev=2063.86 00:44:14.044 clat (usec): min=170, max=332, avg=222.68, stdev=26.52 00:44:14.044 lat (usec): min=182, max=350, avg=235.24, stdev=26.69 00:44:14.044 clat percentiles (usec): 00:44:14.044 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 200], 00:44:14.044 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 229], 00:44:14.044 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 260], 95.00th=[ 269], 00:44:14.044 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 334], 99.95th=[ 334], 00:44:14.044 | 99.99th=[ 334] 00:44:14.044 bw ( KiB/s): min= 4096, max= 4096, per=14.06%, avg=4096.00, stdev= 0.00, samples=1 00:44:14.044 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:14.044 lat (usec) : 250=53.08%, 500=42.34%, 750=3.24% 00:44:14.044 lat (msec) : 10=0.08%, 50=1.26% 00:44:14.044 cpu : usr=0.70%, sys=2.60%, ctx=1268, majf=0, minf=1 00:44:14.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:44:14.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:14.044 issued rwts: total=512,754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:14.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:14.044 00:44:14.044 Run status group 0 (all jobs): 00:44:14.044 READ: bw=25.1MiB/s (26.3MB/s), 2046KiB/s-8184KiB/s (2095kB/s-8380kB/s), io=25.1MiB (26.3MB), run=1001-1001msec 00:44:14.044 WRITE: bw=28.5MiB/s (29.8MB/s), 3013KiB/s-9758KiB/s (3085kB/s-9992kB/s), io=28.5MiB (29.9MB), run=1001-1001msec 00:44:14.044 00:44:14.044 Disk stats (read/write): 00:44:14.044 nvme0n1: ios=1697/2048, merge=0/0, ticks=1358/337, in_queue=1695, util=97.90% 00:44:14.044 nvme0n2: ios=1536/1786, merge=0/0, ticks=406/347, in_queue=753, util=83.57% 00:44:14.044 nvme0n3: ios=1536/1913, merge=0/0, ticks=376/349, in_queue=725, util=87.81% 00:44:14.044 nvme0n4: ios=157/512, merge=0/0, ticks=1633/111, in_queue=1744, util=98.46% 00:44:14.044 15:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:44:14.044 [global] 00:44:14.044 thread=1 00:44:14.044 invalidate=1 00:44:14.044 rw=randwrite 00:44:14.044 time_based=1 00:44:14.044 runtime=1 00:44:14.044 ioengine=libaio 00:44:14.044 direct=1 00:44:14.044 bs=4096 00:44:14.044 iodepth=1 00:44:14.044 norandommap=0 00:44:14.044 numjobs=1 00:44:14.044 00:44:14.044 verify_dump=1 00:44:14.044 verify_backlog=512 00:44:14.044 verify_state_save=0 00:44:14.044 do_verify=1 00:44:14.044 verify=crc32c-intel 00:44:14.044 [job0] 00:44:14.044 filename=/dev/nvme0n1 00:44:14.044 [job1] 00:44:14.044 filename=/dev/nvme0n2 00:44:14.044 [job2] 00:44:14.044 filename=/dev/nvme0n3 00:44:14.044 [job3] 00:44:14.044 filename=/dev/nvme0n4 00:44:14.044 Could not set queue depth (nvme0n1) 
00:44:14.044 Could not set queue depth (nvme0n2) 00:44:14.044 Could not set queue depth (nvme0n3) 00:44:14.044 Could not set queue depth (nvme0n4) 00:44:14.044 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:14.044 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:14.044 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:14.044 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:14.044 fio-3.35 00:44:14.044 Starting 4 threads 00:44:15.414 00:44:15.414 job0: (groupid=0, jobs=1): err= 0: pid=4183180: Wed Nov 6 15:48:42 2024 00:44:15.414 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:44:15.414 slat (nsec): min=9618, max=26388, avg=20523.32, stdev=4952.73 00:44:15.414 clat (usec): min=40789, max=42003, avg=41010.60, stdev=236.32 00:44:15.414 lat (usec): min=40803, max=42025, avg=41031.13, stdev=236.88 00:44:15.414 clat percentiles (usec): 00:44:15.414 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:44:15.414 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:15.414 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:44:15.414 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:15.414 | 99.99th=[42206] 00:44:15.414 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:44:15.414 slat (nsec): min=9254, max=47830, avg=12232.61, stdev=3603.92 00:44:15.414 clat (usec): min=162, max=651, avg=225.20, stdev=40.00 00:44:15.414 lat (usec): min=174, max=664, avg=237.43, stdev=40.57 00:44:15.414 clat percentiles (usec): 00:44:15.414 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 194], 20.00th=[ 206], 00:44:15.414 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 229], 00:44:15.414 | 70.00th=[ 233], 
80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 262], 00:44:15.414 | 99.00th=[ 310], 99.50th=[ 570], 99.90th=[ 652], 99.95th=[ 652], 00:44:15.414 | 99.99th=[ 652] 00:44:15.414 bw ( KiB/s): min= 4087, max= 4087, per=27.85%, avg=4087.00, stdev= 0.00, samples=1 00:44:15.414 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:44:15.414 lat (usec) : 250=88.20%, 500=6.93%, 750=0.75% 00:44:15.414 lat (msec) : 50=4.12% 00:44:15.414 cpu : usr=0.39%, sys=0.39%, ctx=535, majf=0, minf=1 00:44:15.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:15.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.414 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:15.414 job1: (groupid=0, jobs=1): err= 0: pid=4183182: Wed Nov 6 15:48:42 2024 00:44:15.414 read: IOPS=583, BW=2334KiB/s (2390kB/s)(2336KiB/1001msec) 00:44:15.414 slat (nsec): min=6377, max=36846, avg=7795.69, stdev=2857.05 00:44:15.414 clat (usec): min=203, max=41187, avg=1389.75, stdev=6750.18 00:44:15.414 lat (usec): min=210, max=41196, avg=1397.55, stdev=6751.86 00:44:15.414 clat percentiles (usec): 00:44:15.414 | 1.00th=[ 210], 5.00th=[ 212], 10.00th=[ 212], 20.00th=[ 215], 00:44:15.414 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 223], 00:44:15.414 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 297], 00:44:15.414 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:44:15.414 | 99.99th=[41157] 00:44:15.414 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:44:15.414 slat (nsec): min=9157, max=42873, avg=10258.27, stdev=1848.73 00:44:15.414 clat (usec): min=130, max=314, avg=166.07, stdev=20.04 00:44:15.414 lat (usec): min=148, max=357, avg=176.32, stdev=20.43 00:44:15.415 clat percentiles (usec): 
00:44:15.415 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:44:15.415 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 172], 60.00th=[ 178], 00:44:15.415 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 194], 00:44:15.415 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 293], 99.95th=[ 314], 00:44:15.415 | 99.99th=[ 314] 00:44:15.415 bw ( KiB/s): min= 4096, max= 4096, per=27.91%, avg=4096.00, stdev= 0.00, samples=1 00:44:15.415 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:15.415 lat (usec) : 250=96.89%, 500=1.99%, 750=0.06% 00:44:15.415 lat (msec) : 50=1.06% 00:44:15.415 cpu : usr=1.00%, sys=1.30%, ctx=1609, majf=0, minf=2 00:44:15.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:15.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.415 issued rwts: total=584,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:15.415 job2: (groupid=0, jobs=1): err= 0: pid=4183183: Wed Nov 6 15:48:42 2024 00:44:15.415 read: IOPS=521, BW=2085KiB/s (2135kB/s)(2112KiB/1013msec) 00:44:15.415 slat (nsec): min=6857, max=26120, avg=8042.62, stdev=1978.35 00:44:15.415 clat (usec): min=213, max=41987, avg=1499.41, stdev=7002.51 00:44:15.415 lat (usec): min=221, max=42000, avg=1507.45, stdev=7003.86 00:44:15.415 clat percentiles (usec): 00:44:15.415 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:44:15.415 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:44:15.415 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 310], 95.00th=[ 437], 00:44:15.415 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:44:15.415 | 99.99th=[42206] 00:44:15.415 write: IOPS=1010, BW=4043KiB/s (4140kB/s)(4096KiB/1013msec); 0 zone resets 00:44:15.415 slat (nsec): min=9507, max=38354, avg=11238.91, 
stdev=2476.85 00:44:15.415 clat (usec): min=149, max=439, avg=196.84, stdev=31.12 00:44:15.415 lat (usec): min=161, max=477, avg=208.08, stdev=31.53 00:44:15.415 clat percentiles (usec): 00:44:15.415 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:44:15.415 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:44:15.415 | 70.00th=[ 200], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 247], 00:44:15.415 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 375], 99.95th=[ 441], 00:44:15.415 | 99.99th=[ 441] 00:44:15.415 bw ( KiB/s): min= 8192, max= 8192, per=55.82%, avg=8192.00, stdev= 0.00, samples=1 00:44:15.415 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:44:15.415 lat (usec) : 250=80.73%, 500=18.11%, 750=0.13% 00:44:15.415 lat (msec) : 50=1.03% 00:44:15.415 cpu : usr=0.99%, sys=1.28%, ctx=1553, majf=0, minf=1 00:44:15.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:15.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.415 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:15.415 job3: (groupid=0, jobs=1): err= 0: pid=4183184: Wed Nov 6 15:48:42 2024 00:44:15.415 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:44:15.415 slat (nsec): min=6781, max=32974, avg=7914.16, stdev=2409.60 00:44:15.415 clat (usec): min=210, max=41672, avg=718.98, stdev=4259.23 00:44:15.415 lat (usec): min=218, max=41682, avg=726.90, stdev=4259.88 00:44:15.415 clat percentiles (usec): 00:44:15.415 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 249], 00:44:15.415 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 258], 60.00th=[ 260], 00:44:15.415 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 302], 00:44:15.415 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 
99.95th=[41681], 00:44:15.415 | 99.99th=[41681] 00:44:15.415 write: IOPS=1202, BW=4811KiB/s (4927kB/s)(4816KiB/1001msec); 0 zone resets 00:44:15.415 slat (nsec): min=9379, max=48521, avg=10543.76, stdev=1552.30 00:44:15.415 clat (usec): min=147, max=458, avg=197.96, stdev=29.09 00:44:15.415 lat (usec): min=157, max=507, avg=208.50, stdev=29.44 00:44:15.415 clat percentiles (usec): 00:44:15.415 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:44:15.415 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 200], 00:44:15.415 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 239], 95.00th=[ 245], 00:44:15.415 | 99.00th=[ 265], 99.50th=[ 297], 99.90th=[ 322], 99.95th=[ 457], 00:44:15.415 | 99.99th=[ 457] 00:44:15.415 bw ( KiB/s): min= 8192, max= 8192, per=55.82%, avg=8192.00, stdev= 0.00, samples=1 00:44:15.415 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:44:15.415 lat (usec) : 250=62.84%, 500=36.45%, 750=0.13%, 1000=0.04% 00:44:15.415 lat (msec) : 20=0.04%, 50=0.49% 00:44:15.415 cpu : usr=1.30%, sys=1.90%, ctx=2229, majf=0, minf=2 00:44:15.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:15.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.415 issued rwts: total=1024,1204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:15.415 00:44:15.415 Run status group 0 (all jobs): 00:44:15.415 READ: bw=8413KiB/s (8615kB/s), 85.8KiB/s-4092KiB/s (87.8kB/s-4190kB/s), io=8632KiB (8839kB), run=1001-1026msec 00:44:15.415 WRITE: bw=14.3MiB/s (15.0MB/s), 1996KiB/s-4811KiB/s (2044kB/s-4927kB/s), io=14.7MiB (15.4MB), run=1001-1026msec 00:44:15.415 00:44:15.415 Disk stats (read/write): 00:44:15.415 nvme0n1: ios=39/512, merge=0/0, ticks=997/116, in_queue=1113, util=96.49% 00:44:15.415 nvme0n2: ios=584/1024, merge=0/0, ticks=1635/171, 
in_queue=1806, util=97.44% 00:44:15.415 nvme0n3: ios=556/1024, merge=0/0, ticks=852/198, in_queue=1050, util=98.11% 00:44:15.415 nvme0n4: ios=628/1024, merge=0/0, ticks=1344/201, in_queue=1545, util=97.25% 00:44:15.415 15:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:44:15.415 [global] 00:44:15.415 thread=1 00:44:15.415 invalidate=1 00:44:15.415 rw=write 00:44:15.415 time_based=1 00:44:15.415 runtime=1 00:44:15.415 ioengine=libaio 00:44:15.415 direct=1 00:44:15.415 bs=4096 00:44:15.415 iodepth=128 00:44:15.415 norandommap=0 00:44:15.415 numjobs=1 00:44:15.415 00:44:15.415 verify_dump=1 00:44:15.415 verify_backlog=512 00:44:15.415 verify_state_save=0 00:44:15.415 do_verify=1 00:44:15.415 verify=crc32c-intel 00:44:15.415 [job0] 00:44:15.415 filename=/dev/nvme0n1 00:44:15.415 [job1] 00:44:15.415 filename=/dev/nvme0n2 00:44:15.415 [job2] 00:44:15.415 filename=/dev/nvme0n3 00:44:15.415 [job3] 00:44:15.415 filename=/dev/nvme0n4 00:44:15.415 Could not set queue depth (nvme0n1) 00:44:15.415 Could not set queue depth (nvme0n2) 00:44:15.415 Could not set queue depth (nvme0n3) 00:44:15.415 Could not set queue depth (nvme0n4) 00:44:15.672 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:15.672 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:15.672 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:15.672 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:15.672 fio-3.35 00:44:15.672 Starting 4 threads 00:44:17.044 00:44:17.044 job0: (groupid=0, jobs=1): err= 0: pid=4183561: Wed Nov 6 15:48:44 2024 00:44:17.044 read: IOPS=2989, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1004msec) 00:44:17.044 
slat (nsec): min=1108, max=25668k, avg=141226.30, stdev=1098099.41 00:44:17.044 clat (usec): min=2360, max=50037, avg=17326.97, stdev=5992.53 00:44:17.044 lat (usec): min=4470, max=50060, avg=17468.20, stdev=6097.93 00:44:17.044 clat percentiles (usec): 00:44:17.044 | 1.00th=[ 8356], 5.00th=[10814], 10.00th=[13042], 20.00th=[13435], 00:44:17.044 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[16188], 00:44:17.044 | 70.00th=[19268], 80.00th=[23200], 90.00th=[26608], 95.00th=[28967], 00:44:17.044 | 99.00th=[34341], 99.50th=[35390], 99.90th=[46400], 99.95th=[46400], 00:44:17.045 | 99.99th=[50070] 00:44:17.045 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:44:17.045 slat (usec): min=2, max=14289, avg=179.43, stdev=863.05 00:44:17.045 clat (usec): min=967, max=68302, avg=24508.27, stdev=14333.49 00:44:17.045 lat (usec): min=976, max=68306, avg=24687.70, stdev=14409.36 00:44:17.045 clat percentiles (usec): 00:44:17.045 | 1.00th=[ 3032], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[16057], 00:44:17.045 | 30.00th=[18744], 40.00th=[20579], 50.00th=[21365], 60.00th=[22676], 00:44:17.045 | 70.00th=[23200], 80.00th=[25560], 90.00th=[51119], 95.00th=[63177], 00:44:17.045 | 99.00th=[64226], 99.50th=[65799], 99.90th=[68682], 99.95th=[68682], 00:44:17.045 | 99.99th=[68682] 00:44:17.045 bw ( KiB/s): min=10504, max=14072, per=18.33%, avg=12288.00, stdev=2522.96, samples=2 00:44:17.045 iops : min= 2626, max= 3518, avg=3072.00, stdev=630.74, samples=2 00:44:17.045 lat (usec) : 1000=0.05% 00:44:17.045 lat (msec) : 2=0.16%, 4=0.44%, 10=6.32%, 20=46.14%, 50=41.43% 00:44:17.045 lat (msec) : 100=5.45% 00:44:17.045 cpu : usr=1.99%, sys=3.49%, ctx=381, majf=0, minf=1 00:44:17.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:44:17.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:17.045 issued rwts: 
total=3001,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:17.045 job1: (groupid=0, jobs=1): err= 0: pid=4183562: Wed Nov 6 15:48:44 2024 00:44:17.045 read: IOPS=4749, BW=18.6MiB/s (19.5MB/s)(19.5MiB/1050msec) 00:44:17.045 slat (nsec): min=1517, max=15985k, avg=103020.53, stdev=867883.42 00:44:17.045 clat (usec): min=2813, max=58500, avg=13720.44, stdev=7821.70 00:44:17.045 lat (usec): min=2820, max=59242, avg=13823.46, stdev=7875.23 00:44:17.045 clat percentiles (usec): 00:44:17.045 | 1.00th=[ 6718], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9372], 00:44:17.045 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10421], 60.00th=[11469], 00:44:17.045 | 70.00th=[14877], 80.00th=[16909], 90.00th=[21627], 95.00th=[27919], 00:44:17.045 | 99.00th=[50594], 99.50th=[50594], 99.90th=[58459], 99.95th=[58459], 00:44:17.045 | 99.99th=[58459] 00:44:17.045 write: IOPS=4876, BW=19.0MiB/s (20.0MB/s)(20.0MiB/1050msec); 0 zone resets 00:44:17.045 slat (usec): min=2, max=19938, avg=90.89, stdev=693.85 00:44:17.045 clat (usec): min=1816, max=45283, avg=12563.25, stdev=5898.62 00:44:17.045 lat (usec): min=1829, max=45318, avg=12654.14, stdev=5953.04 00:44:17.045 clat percentiles (usec): 00:44:17.045 | 1.00th=[ 3785], 5.00th=[ 6521], 10.00th=[ 7570], 20.00th=[ 8848], 00:44:17.045 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10552], 60.00th=[10814], 00:44:17.045 | 70.00th=[13173], 80.00th=[17695], 90.00th=[21365], 95.00th=[23462], 00:44:17.045 | 99.00th=[30802], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:44:17.045 | 99.99th=[45351] 00:44:17.045 bw ( KiB/s): min=16384, max=24576, per=30.55%, avg=20480.00, stdev=5792.62, samples=2 00:44:17.045 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:44:17.045 lat (msec) : 2=0.04%, 4=0.77%, 10=38.42%, 20=47.87%, 50=11.95% 00:44:17.045 lat (msec) : 100=0.95% 00:44:17.045 cpu : usr=3.24%, sys=5.82%, ctx=396, majf=0, minf=1 00:44:17.045 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:44:17.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:17.045 issued rwts: total=4987,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:17.045 job2: (groupid=0, jobs=1): err= 0: pid=4183563: Wed Nov 6 15:48:44 2024 00:44:17.045 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:44:17.045 slat (nsec): min=1369, max=12042k, avg=117374.22, stdev=831772.35 00:44:17.045 clat (usec): min=3232, max=33452, avg=14028.04, stdev=4701.69 00:44:17.045 lat (usec): min=3242, max=33462, avg=14145.41, stdev=4757.95 00:44:17.045 clat percentiles (usec): 00:44:17.045 | 1.00th=[ 5538], 5.00th=[ 8586], 10.00th=[10552], 20.00th=[11207], 00:44:17.045 | 30.00th=[11469], 40.00th=[12256], 50.00th=[12649], 60.00th=[12911], 00:44:17.045 | 70.00th=[14091], 80.00th=[16909], 90.00th=[20579], 95.00th=[25035], 00:44:17.045 | 99.00th=[31327], 99.50th=[31851], 99.90th=[33424], 99.95th=[33424], 00:44:17.045 | 99.99th=[33424] 00:44:17.045 write: IOPS=3926, BW=15.3MiB/s (16.1MB/s)(15.5MiB/1012msec); 0 zone resets 00:44:17.045 slat (usec): min=2, max=11114, avg=140.72, stdev=656.61 00:44:17.045 clat (usec): min=2609, max=60818, avg=19500.33, stdev=11582.40 00:44:17.045 lat (usec): min=2615, max=60822, avg=19641.05, stdev=11655.91 00:44:17.045 clat percentiles (usec): 00:44:17.045 | 1.00th=[ 3654], 5.00th=[ 7242], 10.00th=[ 8848], 20.00th=[10814], 00:44:17.045 | 30.00th=[12387], 40.00th=[12649], 50.00th=[15139], 60.00th=[20579], 00:44:17.045 | 70.00th=[22938], 80.00th=[24249], 90.00th=[38536], 95.00th=[44827], 00:44:17.045 | 99.00th=[53216], 99.50th=[57934], 99.90th=[60556], 99.95th=[60556], 00:44:17.045 | 99.99th=[61080] 00:44:17.045 bw ( KiB/s): min=11336, max=19440, per=22.96%, avg=15388.00, stdev=5730.39, samples=2 00:44:17.045 iops : min= 2834, max= 
4860, avg=3847.00, stdev=1432.60, samples=2 00:44:17.045 lat (msec) : 4=0.57%, 10=10.04%, 20=63.05%, 50=25.06%, 100=1.28% 00:44:17.045 cpu : usr=3.56%, sys=3.46%, ctx=446, majf=0, minf=1 00:44:17.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:44:17.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:17.045 issued rwts: total=3584,3974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:17.045 job3: (groupid=0, jobs=1): err= 0: pid=4183564: Wed Nov 6 15:48:44 2024 00:44:17.045 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec) 00:44:17.045 slat (nsec): min=1283, max=11253k, avg=100655.55, stdev=816466.39 00:44:17.045 clat (usec): min=5760, max=29431, avg=12642.94, stdev=3494.11 00:44:17.045 lat (usec): min=5770, max=30151, avg=12743.60, stdev=3575.25 00:44:17.045 clat percentiles (usec): 00:44:17.045 | 1.00th=[ 7111], 5.00th=[ 7832], 10.00th=[ 9372], 20.00th=[10552], 00:44:17.045 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11994], 00:44:17.045 | 70.00th=[12518], 80.00th=[15270], 90.00th=[17957], 95.00th=[19792], 00:44:17.045 | 99.00th=[22414], 99.50th=[27132], 99.90th=[29492], 99.95th=[29492], 00:44:17.045 | 99.99th=[29492] 00:44:17.045 write: IOPS=5369, BW=21.0MiB/s (22.0MB/s)(21.2MiB/1011msec); 0 zone resets 00:44:17.045 slat (usec): min=2, max=9928, avg=83.78, stdev=516.24 00:44:17.045 clat (usec): min=1440, max=30156, avg=11709.59, stdev=3535.17 00:44:17.045 lat (usec): min=1453, max=30160, avg=11793.38, stdev=3561.06 00:44:17.045 clat percentiles (usec): 00:44:17.045 | 1.00th=[ 3752], 5.00th=[ 6915], 10.00th=[ 7635], 20.00th=[ 8979], 00:44:17.045 | 30.00th=[ 9896], 40.00th=[11076], 50.00th=[11469], 60.00th=[12125], 00:44:17.045 | 70.00th=[12387], 80.00th=[13698], 90.00th=[16450], 95.00th=[19268], 00:44:17.045 | 99.00th=[21365], 
99.50th=[22152], 99.90th=[30278], 99.95th=[30278], 00:44:17.045 | 99.99th=[30278] 00:44:17.045 bw ( KiB/s): min=20480, max=21936, per=31.64%, avg=21208.00, stdev=1029.55, samples=2 00:44:17.045 iops : min= 5120, max= 5484, avg=5302.00, stdev=257.39, samples=2 00:44:17.045 lat (msec) : 2=0.16%, 4=0.43%, 10=22.22%, 20=72.76%, 50=4.44% 00:44:17.045 cpu : usr=4.16%, sys=5.54%, ctx=470, majf=0, minf=2 00:44:17.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:44:17.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:17.045 issued rwts: total=5120,5429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:17.045 00:44:17.045 Run status group 0 (all jobs): 00:44:17.045 READ: bw=62.1MiB/s (65.1MB/s), 11.7MiB/s-19.8MiB/s (12.2MB/s-20.7MB/s), io=65.2MiB (68.4MB), run=1004-1050msec 00:44:17.045 WRITE: bw=65.5MiB/s (68.6MB/s), 12.0MiB/s-21.0MiB/s (12.5MB/s-22.0MB/s), io=68.7MiB (72.1MB), run=1004-1050msec 00:44:17.045 00:44:17.045 Disk stats (read/write): 00:44:17.045 nvme0n1: ios=2585/2743, merge=0/0, ticks=33208/41155, in_queue=74363, util=98.10% 00:44:17.045 nvme0n2: ios=4141/4103, merge=0/0, ticks=51982/47539, in_queue=99521, util=98.48% 00:44:17.045 nvme0n3: ios=3130/3479, merge=0/0, ticks=41892/60539, in_queue=102431, util=98.44% 00:44:17.045 nvme0n4: ios=4249/4608, merge=0/0, ticks=52551/52891, in_queue=105442, util=97.90% 00:44:17.045 15:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:44:17.045 [global] 00:44:17.045 thread=1 00:44:17.045 invalidate=1 00:44:17.045 rw=randwrite 00:44:17.045 time_based=1 00:44:17.045 runtime=1 00:44:17.045 ioengine=libaio 00:44:17.045 direct=1 00:44:17.045 bs=4096 00:44:17.045 
iodepth=128 00:44:17.045 norandommap=0 00:44:17.045 numjobs=1 00:44:17.045 00:44:17.045 verify_dump=1 00:44:17.045 verify_backlog=512 00:44:17.045 verify_state_save=0 00:44:17.045 do_verify=1 00:44:17.045 verify=crc32c-intel 00:44:17.045 [job0] 00:44:17.045 filename=/dev/nvme0n1 00:44:17.045 [job1] 00:44:17.045 filename=/dev/nvme0n2 00:44:17.045 [job2] 00:44:17.045 filename=/dev/nvme0n3 00:44:17.045 [job3] 00:44:17.045 filename=/dev/nvme0n4 00:44:17.045 Could not set queue depth (nvme0n1) 00:44:17.045 Could not set queue depth (nvme0n2) 00:44:17.045 Could not set queue depth (nvme0n3) 00:44:17.045 Could not set queue depth (nvme0n4) 00:44:17.303 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:17.303 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:17.303 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:17.303 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:17.303 fio-3.35 00:44:17.303 Starting 4 threads 00:44:18.673 00:44:18.673 job0: (groupid=0, jobs=1): err= 0: pid=4183926: Wed Nov 6 15:48:46 2024 00:44:18.673 read: IOPS=5169, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1005msec) 00:44:18.673 slat (nsec): min=1298, max=15956k, avg=99232.15, stdev=802257.22 00:44:18.673 clat (usec): min=4016, max=38633, avg=12440.59, stdev=5263.20 00:44:18.673 lat (usec): min=4907, max=38650, avg=12539.82, stdev=5316.72 00:44:18.673 clat percentiles (usec): 00:44:18.673 | 1.00th=[ 5276], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9241], 00:44:18.673 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[11076], 00:44:18.673 | 70.00th=[13566], 80.00th=[15139], 90.00th=[17695], 95.00th=[26608], 00:44:18.673 | 99.00th=[31589], 99.50th=[32375], 99.90th=[38536], 99.95th=[38536], 00:44:18.673 | 99.99th=[38536] 00:44:18.673 
write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:44:18.673 slat (usec): min=2, max=15107, avg=77.30, stdev=641.15 00:44:18.673 clat (usec): min=651, max=38556, avg=11136.07, stdev=4708.15 00:44:18.673 lat (usec): min=661, max=38563, avg=11213.37, stdev=4751.78 00:44:18.673 clat percentiles (usec): 00:44:18.673 | 1.00th=[ 5669], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 7767], 00:44:18.673 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:44:18.673 | 70.00th=[11076], 80.00th=[13960], 90.00th=[17433], 95.00th=[19792], 00:44:18.673 | 99.00th=[30016], 99.50th=[32637], 99.90th=[35914], 99.95th=[35914], 00:44:18.673 | 99.99th=[38536] 00:44:18.673 bw ( KiB/s): min=18768, max=25872, per=31.56%, avg=22320.00, stdev=5023.29, samples=2 00:44:18.673 iops : min= 4692, max= 6468, avg=5580.00, stdev=1255.82, samples=2 00:44:18.673 lat (usec) : 750=0.03% 00:44:18.673 lat (msec) : 2=0.02%, 4=0.17%, 10=49.49%, 20=44.68%, 50=5.62% 00:44:18.673 cpu : usr=4.48%, sys=6.67%, ctx=267, majf=0, minf=1 00:44:18.673 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:44:18.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:18.673 issued rwts: total=5195,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:18.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:18.673 job1: (groupid=0, jobs=1): err= 0: pid=4183927: Wed Nov 6 15:48:46 2024 00:44:18.673 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:44:18.673 slat (usec): min=2, max=14830, avg=146.17, stdev=961.41 00:44:18.673 clat (usec): min=6145, max=37016, avg=18952.41, stdev=5156.79 00:44:18.673 lat (usec): min=6153, max=41745, avg=19098.58, stdev=5239.46 00:44:18.673 clat percentiles (usec): 00:44:18.673 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[10683], 20.00th=[14746], 00:44:18.673 | 30.00th=[16057], 40.00th=[18744], 
50.00th=[19530], 60.00th=[20579], 00:44:18.673 | 70.00th=[21890], 80.00th=[22938], 90.00th=[24511], 95.00th=[26608], 00:44:18.673 | 99.00th=[30802], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:44:18.673 | 99.99th=[36963] 00:44:18.673 write: IOPS=3675, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1002msec); 0 zone resets 00:44:18.673 slat (usec): min=2, max=20391, avg=117.92, stdev=861.60 00:44:18.673 clat (usec): min=630, max=34870, avg=15972.45, stdev=5194.37 00:44:18.673 lat (usec): min=1240, max=34872, avg=16090.37, stdev=5252.19 00:44:18.673 clat percentiles (usec): 00:44:18.673 | 1.00th=[ 4424], 5.00th=[ 5669], 10.00th=[ 8029], 20.00th=[12125], 00:44:18.673 | 30.00th=[15008], 40.00th=[15795], 50.00th=[16450], 60.00th=[16581], 00:44:18.673 | 70.00th=[17695], 80.00th=[19006], 90.00th=[21627], 95.00th=[25035], 00:44:18.673 | 99.00th=[31327], 99.50th=[31327], 99.90th=[32637], 99.95th=[32637], 00:44:18.673 | 99.99th=[34866] 00:44:18.673 bw ( KiB/s): min=12288, max=16384, per=20.27%, avg=14336.00, stdev=2896.31, samples=2 00:44:18.673 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:44:18.673 lat (usec) : 750=0.01% 00:44:18.673 lat (msec) : 2=0.14%, 4=0.15%, 10=7.75%, 20=61.28%, 50=30.67% 00:44:18.673 cpu : usr=2.70%, sys=4.40%, ctx=199, majf=0, minf=1 00:44:18.673 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:44:18.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:18.673 issued rwts: total=3584,3683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:18.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:18.673 job2: (groupid=0, jobs=1): err= 0: pid=4183930: Wed Nov 6 15:48:46 2024 00:44:18.673 read: IOPS=4372, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1002msec) 00:44:18.673 slat (nsec): min=1294, max=9251.5k, avg=114457.64, stdev=604640.10 00:44:18.673 clat (usec): min=429, max=34871, avg=14392.30, 
stdev=5368.87 00:44:18.673 lat (usec): min=1693, max=39797, avg=14506.76, stdev=5393.62 00:44:18.673 clat percentiles (usec): 00:44:18.673 | 1.00th=[ 5014], 5.00th=[10028], 10.00th=[11076], 20.00th=[11600], 00:44:18.673 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[13042], 00:44:18.673 | 70.00th=[13435], 80.00th=[14746], 90.00th=[24773], 95.00th=[28181], 00:44:18.673 | 99.00th=[30540], 99.50th=[31589], 99.90th=[34866], 99.95th=[34866], 00:44:18.673 | 99.99th=[34866] 00:44:18.673 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:44:18.673 slat (usec): min=2, max=4362, avg=103.74, stdev=432.66 00:44:18.673 clat (usec): min=8346, max=40736, avg=13813.54, stdev=5279.97 00:44:18.673 lat (usec): min=8358, max=40770, avg=13917.29, stdev=5310.98 00:44:18.673 clat percentiles (usec): 00:44:18.673 | 1.00th=[ 9634], 5.00th=[11338], 10.00th=[11731], 20.00th=[11994], 00:44:18.673 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:44:18.673 | 70.00th=[12911], 80.00th=[13829], 90.00th=[14353], 95.00th=[22152], 00:44:18.673 | 99.00th=[40109], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:44:18.673 | 99.99th=[40633] 00:44:18.673 bw ( KiB/s): min=16384, max=20480, per=26.06%, avg=18432.00, stdev=2896.31, samples=2 00:44:18.673 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:44:18.673 lat (usec) : 500=0.01% 00:44:18.673 lat (msec) : 2=0.13%, 4=0.22%, 10=3.29%, 20=86.59%, 50=9.75% 00:44:18.673 cpu : usr=3.30%, sys=3.70%, ctx=635, majf=0, minf=1 00:44:18.673 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:44:18.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:18.673 issued rwts: total=4381,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:18.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:18.673 job3: (groupid=0, jobs=1): err= 0: 
pid=4183931: Wed Nov 6 15:48:46 2024 00:44:18.673 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:44:18.673 slat (nsec): min=1460, max=9456.5k, avg=137640.62, stdev=777861.14 00:44:18.673 clat (usec): min=9188, max=35113, avg=17961.35, stdev=7482.65 00:44:18.673 lat (usec): min=9864, max=35122, avg=18098.99, stdev=7513.56 00:44:18.673 clat percentiles (usec): 00:44:18.673 | 1.00th=[10159], 5.00th=[11338], 10.00th=[11731], 20.00th=[12256], 00:44:18.673 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13435], 60.00th=[14484], 00:44:18.673 | 70.00th=[22676], 80.00th=[26084], 90.00th=[31589], 95.00th=[32637], 00:44:18.673 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:44:18.673 | 99.99th=[34866] 00:44:18.673 write: IOPS=3840, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1002msec); 0 zone resets 00:44:18.674 slat (usec): min=2, max=6434, avg=124.49, stdev=660.45 00:44:18.674 clat (usec): min=565, max=33767, avg=16048.40, stdev=5015.91 00:44:18.674 lat (usec): min=6467, max=33780, avg=16172.89, stdev=5003.69 00:44:18.674 clat percentiles (usec): 00:44:18.674 | 1.00th=[ 9372], 5.00th=[10421], 10.00th=[11994], 20.00th=[12256], 00:44:18.674 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[16057], 00:44:18.674 | 70.00th=[19268], 80.00th=[21365], 90.00th=[22676], 95.00th=[25297], 00:44:18.674 | 99.00th=[30278], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:44:18.674 | 99.99th=[33817] 00:44:18.674 bw ( KiB/s): min= 9288, max=20480, per=21.04%, avg=14884.00, stdev=7913.94, samples=2 00:44:18.674 iops : min= 2322, max= 5120, avg=3721.00, stdev=1978.48, samples=2 00:44:18.674 lat (usec) : 750=0.01% 00:44:18.674 lat (msec) : 10=2.14%, 20=67.10%, 50=30.75% 00:44:18.674 cpu : usr=3.00%, sys=5.89%, ctx=288, majf=0, minf=1 00:44:18.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:44:18.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:18.674 issued rwts: total=3584,3848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:18.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:18.674 00:44:18.674 Run status group 0 (all jobs): 00:44:18.674 READ: bw=65.1MiB/s (68.2MB/s), 14.0MiB/s-20.2MiB/s (14.7MB/s-21.2MB/s), io=65.4MiB (68.6MB), run=1002-1005msec 00:44:18.674 WRITE: bw=69.1MiB/s (72.4MB/s), 14.4MiB/s-21.9MiB/s (15.1MB/s-23.0MB/s), io=69.4MiB (72.8MB), run=1002-1005msec 00:44:18.674 00:44:18.674 Disk stats (read/write): 00:44:18.674 nvme0n1: ios=4236/4608, merge=0/0, ticks=54093/50927, in_queue=105020, util=98.20% 00:44:18.674 nvme0n2: ios=3122/3199, merge=0/0, ticks=30992/31358, in_queue=62350, util=98.48% 00:44:18.674 nvme0n3: ios=3584/3848, merge=0/0, ticks=14228/13711, in_queue=27939, util=88.96% 00:44:18.674 nvme0n4: ios=3162/3584, merge=0/0, ticks=13187/13325, in_queue=26512, util=98.43% 00:44:18.674 15:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:44:18.674 15:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4184135 00:44:18.674 15:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:44:18.674 15:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:44:18.674 [global] 00:44:18.674 thread=1 00:44:18.674 invalidate=1 00:44:18.674 rw=read 00:44:18.674 time_based=1 00:44:18.674 runtime=10 00:44:18.674 ioengine=libaio 00:44:18.674 direct=1 00:44:18.674 bs=4096 00:44:18.674 iodepth=1 00:44:18.674 norandommap=1 00:44:18.674 numjobs=1 00:44:18.674 00:44:18.674 [job0] 00:44:18.674 filename=/dev/nvme0n1 00:44:18.674 [job1] 00:44:18.674 filename=/dev/nvme0n2 00:44:18.674 [job2] 00:44:18.674 filename=/dev/nvme0n3 00:44:18.674 [job3] 00:44:18.674 filename=/dev/nvme0n4 
00:44:18.674 Could not set queue depth (nvme0n1) 00:44:18.674 Could not set queue depth (nvme0n2) 00:44:18.674 Could not set queue depth (nvme0n3) 00:44:18.674 Could not set queue depth (nvme0n4) 00:44:18.930 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:18.930 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:18.930 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:18.930 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:18.930 fio-3.35 00:44:18.930 Starting 4 threads 00:44:22.204 15:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:44:22.204 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=31969280, buflen=4096 00:44:22.204 fio: pid=4184308, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:22.204 15:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:44:22.204 15:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:22.204 15:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:44:22.204 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=323584, buflen=4096 00:44:22.205 fio: pid=4184307, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:22.205 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=40034304, 
buflen=4096 00:44:22.205 fio: pid=4184305, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:22.205 15:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:22.205 15:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:44:22.462 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=43139072, buflen=4096 00:44:22.462 fio: pid=4184306, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:22.462 00:44:22.462 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4184305: Wed Nov 6 15:48:50 2024 00:44:22.462 read: IOPS=3100, BW=12.1MiB/s (12.7MB/s)(38.2MiB/3153msec) 00:44:22.462 slat (usec): min=6, max=20957, avg=11.25, stdev=242.37 00:44:22.462 clat (usec): min=178, max=1690, avg=300.53, stdev=43.78 00:44:22.462 lat (usec): min=186, max=21303, avg=311.79, stdev=246.81 00:44:22.462 clat percentiles (usec): 00:44:22.462 | 1.00th=[ 208], 5.00th=[ 223], 10.00th=[ 237], 20.00th=[ 277], 00:44:22.462 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 306], 60.00th=[ 310], 00:44:22.462 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 355], 00:44:22.462 | 99.00th=[ 420], 99.50th=[ 486], 99.90th=[ 529], 99.95th=[ 578], 00:44:22.462 | 99.99th=[ 1696] 00:44:22.462 bw ( KiB/s): min=11944, max=14196, per=37.77%, avg=12740.67, stdev=762.73, samples=6 00:44:22.462 iops : min= 2986, max= 3549, avg=3185.17, stdev=190.68, samples=6 00:44:22.462 lat (usec) : 250=14.66%, 500=84.97%, 750=0.34% 00:44:22.462 lat (msec) : 2=0.02% 00:44:22.462 cpu : usr=1.97%, sys=4.66%, ctx=9777, majf=0, minf=1 00:44:22.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:22.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.462 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.462 issued rwts: total=9775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:22.462 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4184306: Wed Nov 6 15:48:50 2024 00:44:22.462 read: IOPS=3150, BW=12.3MiB/s (12.9MB/s)(41.1MiB/3343msec) 00:44:22.462 slat (usec): min=3, max=27035, avg=14.51, stdev=335.35 00:44:22.462 clat (usec): min=191, max=65857, avg=300.16, stdev=641.97 00:44:22.462 lat (usec): min=200, max=65865, avg=314.67, stdev=725.22 00:44:22.462 clat percentiles (usec): 00:44:22.462 | 1.00th=[ 208], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 247], 00:44:22.462 | 30.00th=[ 277], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 310], 00:44:22.462 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 326], 95.00th=[ 347], 00:44:22.462 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 1598], 99.95th=[ 1680], 00:44:22.462 | 99.99th=[ 2114] 00:44:22.462 bw ( KiB/s): min=12008, max=14490, per=38.27%, avg=12907.00, stdev=875.45, samples=6 00:44:22.462 iops : min= 3002, max= 3622, avg=3226.67, stdev=218.68, samples=6 00:44:22.462 lat (usec) : 250=22.77%, 500=77.02%, 750=0.09% 00:44:22.462 lat (msec) : 2=0.09%, 4=0.02%, 100=0.01% 00:44:22.462 cpu : usr=1.26%, sys=5.48%, ctx=10541, majf=0, minf=2 00:44:22.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:22.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.462 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.462 issued rwts: total=10533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:22.462 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4184307: Wed Nov 6 15:48:50 2024 
00:44:22.462 read: IOPS=27, BW=109KiB/s (111kB/s)(316KiB/2912msec) 00:44:22.462 slat (nsec): min=7600, max=75950, avg=21754.36, stdev=7992.36 00:44:22.462 clat (usec): min=246, max=42128, avg=36563.38, stdev=13095.23 00:44:22.462 lat (usec): min=254, max=42141, avg=36585.07, stdev=13096.20 00:44:22.462 clat percentiles (usec): 00:44:22.462 | 1.00th=[ 247], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[41157], 00:44:22.462 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:22.462 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:44:22.462 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:22.462 | 99.99th=[42206] 00:44:22.462 bw ( KiB/s): min= 96, max= 152, per=0.32%, avg=108.80, stdev=24.40, samples=5 00:44:22.462 iops : min= 24, max= 38, avg=27.20, stdev= 6.10, samples=5 00:44:22.462 lat (usec) : 250=1.25%, 500=8.75%, 750=1.25% 00:44:22.462 lat (msec) : 50=87.50% 00:44:22.462 cpu : usr=0.10%, sys=0.00%, ctx=81, majf=0, minf=2 00:44:22.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:22.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.462 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.462 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:22.462 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=4184308: Wed Nov 6 15:48:50 2024 00:44:22.462 read: IOPS=2912, BW=11.4MiB/s (11.9MB/s)(30.5MiB/2680msec) 00:44:22.462 slat (nsec): min=6926, max=41308, avg=8282.23, stdev=1443.58 00:44:22.462 clat (usec): min=198, max=41305, avg=329.85, stdev=652.95 00:44:22.462 lat (usec): min=207, max=41313, avg=338.13, stdev=652.95 00:44:22.462 clat percentiles (usec): 00:44:22.462 | 1.00th=[ 253], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 302], 00:44:22.462 | 30.00th=[ 306], 
40.00th=[ 310], 50.00th=[ 310], 60.00th=[ 314], 00:44:22.462 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 351], 95.00th=[ 379], 00:44:22.462 | 99.00th=[ 457], 99.50th=[ 510], 99.90th=[ 1270], 99.95th=[ 1565], 00:44:22.462 | 99.99th=[41157] 00:44:22.462 bw ( KiB/s): min=10336, max=12360, per=34.81%, avg=11740.80, stdev=813.91, samples=5 00:44:22.462 iops : min= 2584, max= 3090, avg=2935.20, stdev=203.48, samples=5 00:44:22.462 lat (usec) : 250=0.92%, 500=98.44%, 750=0.41% 00:44:22.463 lat (msec) : 2=0.19%, 50=0.03% 00:44:22.463 cpu : usr=1.83%, sys=4.52%, ctx=7806, majf=0, minf=2 00:44:22.463 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:22.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.463 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.463 issued rwts: total=7806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.463 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:22.463 00:44:22.463 Run status group 0 (all jobs): 00:44:22.463 READ: bw=32.9MiB/s (34.5MB/s), 109KiB/s-12.3MiB/s (111kB/s-12.9MB/s), io=110MiB (115MB), run=2680-3343msec 00:44:22.463 00:44:22.463 Disk stats (read/write): 00:44:22.463 nvme0n1: ios=9764/0, merge=0/0, ticks=2826/0, in_queue=2826, util=93.50% 00:44:22.463 nvme0n2: ios=10437/0, merge=0/0, ticks=3047/0, in_queue=3047, util=93.63% 00:44:22.463 nvme0n3: ios=77/0, merge=0/0, ticks=2808/0, in_queue=2808, util=96.24% 00:44:22.463 nvme0n4: ios=7522/0, merge=0/0, ticks=2418/0, in_queue=2418, util=96.39% 00:44:22.463 15:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:22.463 15:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:44:22.719 15:48:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:22.719 15:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:44:22.977 15:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:22.977 15:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:44:23.234 15:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:23.234 15:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:44:23.492 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:23.492 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:44:23.749 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:44:23.749 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 4184135 00:44:23.749 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:44:23.749 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:25.119 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:44:25.119 nvmf hotplug test: fio failed as expected 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:25.119 rmmod nvme_tcp 00:44:25.119 rmmod nvme_fabrics 00:44:25.119 rmmod nvme_keyring 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 4181453 ']' 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 4181453 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 4181453 ']' 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 4181453 00:44:25.119 15:48:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:25.119 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4181453 00:44:25.378 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:44:25.378 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:44:25.378 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4181453' 00:44:25.378 killing process with pid 4181453 00:44:25.378 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 4181453 00:44:25.378 15:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 4181453 00:44:26.313 15:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:26.313 15:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:26.313 15:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:26.313 15:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:44:26.313 15:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:44:26.313 15:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:26.313 15:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:44:26.314 
15:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:26.314 15:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:26.314 15:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:26.314 15:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:26.314 15:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:28.851 15:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:28.851 00:44:28.851 real 0m29.229s 00:44:28.851 user 1m38.400s 00:44:28.851 sys 0m11.702s 00:44:28.851 15:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:28.851 15:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:28.851 ************************************ 00:44:28.851 END TEST nvmf_fio_target 00:44:28.851 ************************************ 00:44:28.851 15:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:44:28.851 15:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:44:28.851 15:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:28.851 15:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:28.851 ************************************ 00:44:28.851 START TEST nvmf_bdevio 00:44:28.851 
************************************ 00:44:28.851 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:44:28.851 * Looking for test storage... 00:44:28.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:28.851 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:28.851 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:44:28.851 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:28.851 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:28.851 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:28.851 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:28.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.852 --rc genhtml_branch_coverage=1 00:44:28.852 --rc genhtml_function_coverage=1 00:44:28.852 --rc genhtml_legend=1 00:44:28.852 --rc geninfo_all_blocks=1 00:44:28.852 --rc geninfo_unexecuted_blocks=1 00:44:28.852 00:44:28.852 ' 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:28.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.852 --rc genhtml_branch_coverage=1 00:44:28.852 --rc genhtml_function_coverage=1 00:44:28.852 --rc genhtml_legend=1 00:44:28.852 --rc geninfo_all_blocks=1 00:44:28.852 --rc geninfo_unexecuted_blocks=1 00:44:28.852 00:44:28.852 ' 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:28.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.852 --rc genhtml_branch_coverage=1 00:44:28.852 --rc genhtml_function_coverage=1 00:44:28.852 --rc genhtml_legend=1 00:44:28.852 --rc geninfo_all_blocks=1 00:44:28.852 --rc geninfo_unexecuted_blocks=1 00:44:28.852 00:44:28.852 ' 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:28.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:44:28.852 --rc genhtml_branch_coverage=1 00:44:28.852 --rc genhtml_function_coverage=1 00:44:28.852 --rc genhtml_legend=1 00:44:28.852 --rc geninfo_all_blocks=1 00:44:28.852 --rc geninfo_unexecuted_blocks=1 00:44:28.852 00:44:28.852 ' 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:44:28.852 15:48:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.852 15:48:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:28.852 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:44:28.853 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:28.853 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:28.853 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:28.853 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:28.853 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:28.853 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:28.853 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:28.853 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:28.853 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:28.853 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:28.853 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:44:28.853 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:35.423 15:49:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:35.423 15:49:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:44:35.423 Found 0000:86:00.0 (0x8086 - 0x159b) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:44:35.423 Found 0000:86:00.1 (0x8086 - 0x159b) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:44:35.423 Found net devices under 0000:86:00.0: cvl_0_0 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:44:35.423 Found net devices under 0000:86:00.1: cvl_0_1 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:44:35.423 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:35.424 
15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:35.424 15:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:35.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:35.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:44:35.424 00:44:35.424 --- 10.0.0.2 ping statistics --- 00:44:35.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:35.424 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:35.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:35.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:44:35.424 00:44:35.424 --- 10.0.0.1 ping statistics --- 00:44:35.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:35.424 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=4188775 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 4188775 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 4188775 ']' 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:35.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:35.424 15:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:35.424 [2024-11-06 15:49:02.181778] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:35.424 [2024-11-06 15:49:02.183854] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:44:35.424 [2024-11-06 15:49:02.183921] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:35.424 [2024-11-06 15:49:02.312532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:35.424 [2024-11-06 15:49:02.420792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:35.424 [2024-11-06 15:49:02.420832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:35.424 [2024-11-06 15:49:02.420843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:35.424 [2024-11-06 15:49:02.420852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:35.424 [2024-11-06 15:49:02.420861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:35.424 [2024-11-06 15:49:02.423247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:44:35.424 [2024-11-06 15:49:02.423329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:44:35.424 [2024-11-06 15:49:02.423393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:35.424 [2024-11-06 15:49:02.423417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:44:35.424 [2024-11-06 15:49:02.731768] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:35.424 [2024-11-06 15:49:02.738970] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:35.424 [2024-11-06 15:49:02.739324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:44:35.424 [2024-11-06 15:49:02.740942] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:35.424 [2024-11-06 15:49:02.741488] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:44:35.424 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:35.424 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:44:35.424 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:35.424 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:35.424 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:35.424 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:35.424 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:35.424 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:35.424 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:35.424 [2024-11-06 15:49:03.048469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:35.683 Malloc0 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:35.683 [2024-11-06 15:49:03.196830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:35.683 { 00:44:35.683 "params": { 00:44:35.683 "name": "Nvme$subsystem", 00:44:35.683 "trtype": "$TEST_TRANSPORT", 00:44:35.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:35.683 "adrfam": "ipv4", 00:44:35.683 "trsvcid": "$NVMF_PORT", 00:44:35.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:35.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:35.683 "hdgst": ${hdgst:-false}, 00:44:35.683 "ddgst": ${ddgst:-false} 00:44:35.683 }, 00:44:35.683 "method": "bdev_nvme_attach_controller" 00:44:35.683 } 00:44:35.683 EOF 00:44:35.683 )") 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:44:35.683 15:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:35.683 "params": { 00:44:35.683 "name": "Nvme1", 00:44:35.683 "trtype": "tcp", 00:44:35.683 "traddr": "10.0.0.2", 00:44:35.683 "adrfam": "ipv4", 00:44:35.683 "trsvcid": "4420", 00:44:35.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:35.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:35.683 "hdgst": false, 00:44:35.683 "ddgst": false 00:44:35.683 }, 00:44:35.683 "method": "bdev_nvme_attach_controller" 00:44:35.683 }' 00:44:35.684 [2024-11-06 15:49:03.279183] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:44:35.684 [2024-11-06 15:49:03.279310] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4188983 ] 00:44:35.941 [2024-11-06 15:49:03.406269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:35.941 [2024-11-06 15:49:03.526866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:35.941 [2024-11-06 15:49:03.526880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:35.941 [2024-11-06 15:49:03.526904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:36.506 I/O targets: 00:44:36.506 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:44:36.506 00:44:36.506 00:44:36.506 CUnit - A unit testing framework for C - Version 2.1-3 00:44:36.506 http://cunit.sourceforge.net/ 00:44:36.506 00:44:36.506 00:44:36.506 Suite: bdevio tests on: Nvme1n1 00:44:36.506 Test: blockdev write read block ...passed 00:44:36.506 Test: blockdev write zeroes read block ...passed 00:44:36.506 Test: blockdev write zeroes read no split ...passed 00:44:36.764 Test: blockdev 
write zeroes read split ...passed 00:44:36.764 Test: blockdev write zeroes read split partial ...passed 00:44:36.764 Test: blockdev reset ...[2024-11-06 15:49:04.229325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:44:36.764 [2024-11-06 15:49:04.229430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032e680 (9): Bad file descriptor 00:44:36.764 [2024-11-06 15:49:04.236521] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:44:36.764 passed 00:44:36.764 Test: blockdev write read 8 blocks ...passed 00:44:36.764 Test: blockdev write read size > 128k ...passed 00:44:36.764 Test: blockdev write read invalid size ...passed 00:44:36.764 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:44:36.764 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:44:36.764 Test: blockdev write read max offset ...passed 00:44:36.764 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:44:36.764 Test: blockdev writev readv 8 blocks ...passed 00:44:36.764 Test: blockdev writev readv 30 x 1block ...passed 00:44:37.022 Test: blockdev writev readv block ...passed 00:44:37.022 Test: blockdev writev readv size > 128k ...passed 00:44:37.022 Test: blockdev writev readv size > 128k in two iovs ...passed 00:44:37.022 Test: blockdev comparev and writev ...[2024-11-06 15:49:04.411494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:37.022 [2024-11-06 15:49:04.411529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:37.022 [2024-11-06 15:49:04.411549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:44:37.022 [2024-11-06 15:49:04.411561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:37.022 [2024-11-06 15:49:04.411923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:37.022 [2024-11-06 15:49:04.411940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:37.022 [2024-11-06 15:49:04.411956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:37.022 [2024-11-06 15:49:04.411968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:37.022 [2024-11-06 15:49:04.412314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:37.022 [2024-11-06 15:49:04.412332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:37.022 [2024-11-06 15:49:04.412351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:37.022 [2024-11-06 15:49:04.412362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:37.022 [2024-11-06 15:49:04.412712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:37.022 [2024-11-06 15:49:04.412731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:37.022 [2024-11-06 15:49:04.412748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:37.022 [2024-11-06 15:49:04.412759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:44:37.022 passed 00:44:37.022 Test: blockdev nvme passthru rw ...passed 00:44:37.022 Test: blockdev nvme passthru vendor specific ...[2024-11-06 15:49:04.494595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:37.022 [2024-11-06 15:49:04.494624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:44:37.022 [2024-11-06 15:49:04.494760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:37.022 [2024-11-06 15:49:04.494775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:44:37.022 [2024-11-06 15:49:04.494906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:37.022 [2024-11-06 15:49:04.494920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:44:37.022 [2024-11-06 15:49:04.495058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:37.022 [2024-11-06 15:49:04.495072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:44:37.022 passed 00:44:37.022 Test: blockdev nvme admin passthru ...passed 00:44:37.022 Test: blockdev copy ...passed 00:44:37.022 00:44:37.022 Run Summary: Type Total Ran Passed Failed Inactive 00:44:37.022 suites 1 1 n/a 0 0 00:44:37.022 tests 23 23 23 0 0 00:44:37.022 asserts 152 152 152 0 n/a 00:44:37.022 00:44:37.022 Elapsed time = 
1.110 seconds 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:37.956 rmmod nvme_tcp 00:44:37.956 rmmod nvme_fabrics 00:44:37.956 rmmod nvme_keyring 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:44:37.956 15:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 4188775 ']' 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 4188775 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 4188775 ']' 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 4188775 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4188775 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4188775' 00:44:37.956 killing process with pid 4188775 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 4188775 00:44:37.956 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 4188775 00:44:39.331 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:39.331 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:39.331 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:39.331 15:49:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:44:39.331 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:39.331 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:44:39.331 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:44:39.331 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:39.331 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:39.331 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:39.331 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:39.331 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.867 15:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:41.868 00:44:41.868 real 0m12.879s 00:44:41.868 user 0m17.471s 00:44:41.868 sys 0m5.664s 00:44:41.868 15:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:41.868 15:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:41.868 ************************************ 00:44:41.868 END TEST nvmf_bdevio 00:44:41.868 ************************************ 00:44:41.868 15:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:44:41.868 00:44:41.868 real 5m4.194s 00:44:41.868 user 10m7.035s 00:44:41.868 sys 1m56.787s 00:44:41.868 15:49:08 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:41.868 15:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:41.868 ************************************ 00:44:41.868 END TEST nvmf_target_core_interrupt_mode 00:44:41.868 ************************************ 00:44:41.868 15:49:08 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:44:41.868 15:49:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:44:41.868 15:49:08 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:41.868 15:49:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:41.868 ************************************ 00:44:41.868 START TEST nvmf_interrupt 00:44:41.868 ************************************ 00:44:41.868 15:49:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:44:41.868 * Looking for test storage... 
00:44:41.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:41.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.868 --rc genhtml_branch_coverage=1 00:44:41.868 --rc genhtml_function_coverage=1 00:44:41.868 --rc genhtml_legend=1 00:44:41.868 --rc geninfo_all_blocks=1 00:44:41.868 --rc geninfo_unexecuted_blocks=1 00:44:41.868 00:44:41.868 ' 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:41.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.868 --rc genhtml_branch_coverage=1 00:44:41.868 --rc 
genhtml_function_coverage=1 00:44:41.868 --rc genhtml_legend=1 00:44:41.868 --rc geninfo_all_blocks=1 00:44:41.868 --rc geninfo_unexecuted_blocks=1 00:44:41.868 00:44:41.868 ' 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:41.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.868 --rc genhtml_branch_coverage=1 00:44:41.868 --rc genhtml_function_coverage=1 00:44:41.868 --rc genhtml_legend=1 00:44:41.868 --rc geninfo_all_blocks=1 00:44:41.868 --rc geninfo_unexecuted_blocks=1 00:44:41.868 00:44:41.868 ' 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:41.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.868 --rc genhtml_branch_coverage=1 00:44:41.868 --rc genhtml_function_coverage=1 00:44:41.868 --rc genhtml_legend=1 00:44:41.868 --rc geninfo_all_blocks=1 00:44:41.868 --rc geninfo_unexecuted_blocks=1 00:44:41.868 00:44:41.868 ' 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:41.868 
15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:41.868 15:49:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.869 
15:49:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:41.869 15:49:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:41.869 
15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:44:41.869 15:49:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:48.439 15:49:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:44:48.439 Found 0000:86:00.0 (0x8086 - 0x159b) 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:48.439 15:49:14 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:44:48.440 Found 0000:86:00.1 (0x8086 - 0x159b) 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:48.440 15:49:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:44:48.440 Found net devices under 0000:86:00.0: cvl_0_0 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:44:48.440 Found net devices under 0000:86:00.1: cvl_0_1 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:48.440 15:49:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:48.440 15:49:15 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:48.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:48.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:44:48.440 00:44:48.440 --- 10.0.0.2 ping statistics --- 00:44:48.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:48.440 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:48.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:48.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:44:48.440 00:44:48.440 --- 10.0.0.1 ping statistics --- 00:44:48.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:48.440 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:48.440 15:49:15 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=4192980 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 4192980 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 4192980 ']' 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:48.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:48.440 [2024-11-06 15:49:15.189180] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:48.440 [2024-11-06 15:49:15.191296] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:44:48.440 [2024-11-06 15:49:15.191363] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:48.440 [2024-11-06 15:49:15.319266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:48.440 [2024-11-06 15:49:15.428537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:48.440 [2024-11-06 15:49:15.428576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:48.440 [2024-11-06 15:49:15.428588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:48.440 [2024-11-06 15:49:15.428597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:48.440 [2024-11-06 15:49:15.428610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:48.440 [2024-11-06 15:49:15.430716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:48.440 [2024-11-06 15:49:15.430737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:48.440 [2024-11-06 15:49:15.730262] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:48.440 [2024-11-06 15:49:15.730474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:48.440 [2024-11-06 15:49:15.730717] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:44:48.440 15:49:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:48.441 15:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:48.441 15:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:48.441 15:49:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:48.441 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:44:48.441 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:44:48.441 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:44:48.441 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:44:48.441 5000+0 records in 00:44:48.441 5000+0 records out 00:44:48.441 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0160269 s, 639 MB/s 00:44:48.441 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:44:48.441 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:48.441 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:48.700 AIO0 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:48.700 15:49:16 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:48.700 [2024-11-06 15:49:16.095866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:48.700 [2024-11-06 15:49:16.144244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:44:48.700 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4192980 0 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4192980 0 idle 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4192980 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4192980 -w 256 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4192980 root 20 0 20.1t 205824 99072 S 0.0 0.1 0:00.64 reactor_0' 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4192980 root 20 0 20.1t 205824 99072 S 0.0 0.1 0:00.64 reactor_0 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:48.701 
15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4192980 1 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4192980 1 idle 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4192980 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:48.701 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4192980 -w 256 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4193029 root 20 0 20.1t 205824 99072 S 0.0 0.1 0:00.00 reactor_1' 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4193029 root 20 0 20.1t 
205824 99072 S 0.0 0.1 0:00.00 reactor_1 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=4193199 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4192980 0 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4192980 0 busy 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4192980 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4192980 -w 256 00:44:48.960 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:49.218 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4192980 root 20 0 20.1t 208896 100608 R 26.7 0.1 0:00.68 reactor_0' 00:44:49.218 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4192980 root 20 0 20.1t 208896 100608 R 26.7 0.1 0:00.68 reactor_0 00:44:49.218 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:49.218 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:49.218 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=26.7 00:44:49.218 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=26 00:44:49.218 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:44:49.218 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:44:49.218 15:49:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:44:50.151 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:44:50.151 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:50.151 15:49:17 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4192980 -w 256 00:44:50.151 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4192980 root 20 0 20.1t 219648 100608 R 99.9 0.1 0:03.04 reactor_0' 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4192980 root 20 0 20.1t 219648 100608 R 99.9 0.1 0:03.04 reactor_0 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4192980 1 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4192980 1 busy 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4192980 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@13 -- # local busy_threshold=30 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4192980 -w 256 00:44:50.409 15:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:50.667 15:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4193029 root 20 0 20.1t 219648 100608 R 99.9 0.1 0:01.39 reactor_1' 00:44:50.667 15:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4193029 root 20 0 20.1t 219648 100608 R 99.9 0.1 0:01.39 reactor_1 00:44:50.667 15:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:50.667 15:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:50.667 15:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:44:50.667 15:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:44:50.667 15:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:44:50.667 15:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:44:50.667 15:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:44:50.667 15:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:50.667 15:49:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 4193199 00:45:00.629 Initializing NVMe Controllers 00:45:00.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:45:00.629 Controller IO queue size 256, less than required. 00:45:00.629 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:45:00.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:45:00.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:45:00.629 Initialization complete. Launching workers. 00:45:00.629 ======================================================== 00:45:00.629 Latency(us) 00:45:00.629 Device Information : IOPS MiB/s Average min max 00:45:00.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15029.07 58.71 17044.71 4754.93 59959.00 00:45:00.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 14878.47 58.12 17211.53 8659.17 29966.00 00:45:00.629 ======================================================== 00:45:00.629 Total : 29907.55 116.83 17127.70 4754.93 59959.00 00:45:00.629 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4192980 0 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4192980 0 idle 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4192980 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle 
!= \i\d\l\e ]] 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4192980 -w 256 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4192980 root 20 0 20.1t 219648 100608 S 6.7 0.1 0:20.65 reactor_0' 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4192980 root 20 0 20.1t 219648 100608 S 6.7 0.1 0:20.65 reactor_0 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4192980 1 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4192980 1 idle 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4192980 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@11 -- # local idx=1 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4192980 -w 256 00:45:00.629 15:49:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:00.629 15:49:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4193029 root 20 0 20.1t 219648 100608 S 0.0 0.1 0:10.02 reactor_1' 00:45:00.629 15:49:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4193029 root 20 0 20.1t 219648 100608 S 0.0 0.1 0:10.02 reactor_1 00:45:00.629 15:49:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:00.629 15:49:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:00.629 15:49:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:00.629 15:49:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:00.629 15:49:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:00.629 15:49:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:00.629 15:49:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:00.629 
15:49:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:00.629 15:49:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:45:00.629 15:49:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:45:00.629 15:49:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:45:00.629 15:49:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:45:00.629 15:49:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:45:00.629 15:49:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4192980 0 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4192980 0 idle 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4192980 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=0 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4192980 -w 256 00:45:02.535 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4192980 root 20 0 20.1t 273408 119040 S 0.0 0.1 0:21.40 reactor_0' 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4192980 root 20 0 20.1t 273408 119040 S 0.0 0.1 0:21.40 reactor_0 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@35 -- # return 0 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4192980 1 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4192980 1 idle 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4192980 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4192980 -w 256 00:45:02.794 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:03.052 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4193029 root 20 0 20.1t 273408 119040 S 0.0 0.1 0:10.35 reactor_1' 00:45:03.052 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4193029 root 20 0 20.1t 273408 119040 S 0.0 0.1 0:10.35 reactor_1 00:45:03.052 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:03.052 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 
00:45:03.052 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:03.052 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:03.052 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:03.052 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:03.052 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:03.052 15:49:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:03.052 15:49:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:45:03.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:03.620 15:49:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:03.620 rmmod nvme_tcp 00:45:03.620 rmmod nvme_fabrics 00:45:03.620 rmmod nvme_keyring 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 4192980 ']' 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 4192980 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 4192980 ']' 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 4192980 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 4192980 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 4192980' 00:45:03.620 killing process with pid 4192980 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 4192980 00:45:03.620 15:49:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 4192980 00:45:05.103 15:49:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:05.103 15:49:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:05.103 15:49:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:05.103 15:49:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:45:05.103 15:49:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:45:05.103 15:49:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:05.103 15:49:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:45:05.103 15:49:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:05.103 15:49:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:05.103 15:49:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:05.103 15:49:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:05.103 15:49:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:07.006 15:49:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:07.006 00:45:07.006 real 0m25.413s 00:45:07.006 user 0m41.928s 00:45:07.006 sys 0m9.116s 00:45:07.006 15:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:07.006 15:49:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:07.006 ************************************ 00:45:07.006 END TEST nvmf_interrupt 00:45:07.006 ************************************ 00:45:07.006 00:45:07.006 real 38m1.305s 00:45:07.006 user 92m29.424s 00:45:07.006 sys 10m15.272s 00:45:07.006 15:49:34 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:07.006 15:49:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:07.006 ************************************ 00:45:07.006 END TEST nvmf_tcp 00:45:07.006 ************************************ 00:45:07.006 15:49:34 -- 
spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:45:07.006 15:49:34 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:45:07.006 15:49:34 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:45:07.006 15:49:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:45:07.006 15:49:34 -- common/autotest_common.sh@10 -- # set +x 00:45:07.006 ************************************ 00:45:07.006 START TEST spdkcli_nvmf_tcp 00:45:07.006 ************************************ 00:45:07.006 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:45:07.006 * Looking for test storage... 00:45:07.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:45:07.006 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:07.006 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:45:07.006 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:45:07.266 
15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:07.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:07.266 --rc genhtml_branch_coverage=1 00:45:07.266 --rc genhtml_function_coverage=1 00:45:07.266 
--rc genhtml_legend=1 00:45:07.266 --rc geninfo_all_blocks=1 00:45:07.266 --rc geninfo_unexecuted_blocks=1 00:45:07.266 00:45:07.266 ' 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:07.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:07.266 --rc genhtml_branch_coverage=1 00:45:07.266 --rc genhtml_function_coverage=1 00:45:07.266 --rc genhtml_legend=1 00:45:07.266 --rc geninfo_all_blocks=1 00:45:07.266 --rc geninfo_unexecuted_blocks=1 00:45:07.266 00:45:07.266 ' 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:07.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:07.266 --rc genhtml_branch_coverage=1 00:45:07.266 --rc genhtml_function_coverage=1 00:45:07.266 --rc genhtml_legend=1 00:45:07.266 --rc geninfo_all_blocks=1 00:45:07.266 --rc geninfo_unexecuted_blocks=1 00:45:07.266 00:45:07.266 ' 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:07.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:07.266 --rc genhtml_branch_coverage=1 00:45:07.266 --rc genhtml_function_coverage=1 00:45:07.266 --rc genhtml_legend=1 00:45:07.266 --rc geninfo_all_blocks=1 00:45:07.266 --rc geninfo_unexecuted_blocks=1 00:45:07.266 00:45:07.266 ' 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:07.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2612 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2612 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 2612 ']' 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:45:07.266 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:07.267 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:07.267 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:07.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:07.267 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:07.267 15:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:07.267 [2024-11-06 15:49:34.804579] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:45:07.267 [2024-11-06 15:49:34.804669] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2612 ] 00:45:07.525 [2024-11-06 15:49:34.926258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:07.525 [2024-11-06 15:49:35.025461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:07.525 [2024-11-06 15:49:35.025483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:08.091 15:49:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:08.091 15:49:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:45:08.091 15:49:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:45:08.091 15:49:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:08.091 15:49:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:08.091 15:49:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:45:08.091 15:49:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:45:08.091 15:49:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:45:08.092 15:49:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:08.092 15:49:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:08.092 15:49:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:45:08.092 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:45:08.092 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:45:08.092 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:45:08.092 '\''/bdevs/malloc create 32 512 
Malloc5'\'' '\''Malloc5'\'' True 00:45:08.092 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:45:08.092 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:45:08.092 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:08.092 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:08.092 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:45:08.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:45:08.092 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:45:08.092 ' 00:45:11.374 [2024-11-06 15:49:38.506759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:12.307 [2024-11-06 15:49:39.843361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:45:14.835 [2024-11-06 15:49:42.323292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:45:17.362 [2024-11-06 15:49:44.474100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:45:18.736 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:45:18.736 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:45:18.736 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:45:18.736 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:45:18.736 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:45:18.736 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:45:18.736 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:45:18.736 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:18.736 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:18.736 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:45:18.736 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:45:18.736 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:45:18.736 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:45:18.736 15:49:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:45:18.736 15:49:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:18.736 15:49:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:18.736 15:49:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:45:18.736 15:49:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:18.736 15:49:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:18.736 15:49:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:45:18.736 15:49:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:45:19.302 15:49:46 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:45:19.302 15:49:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:45:19.302 15:49:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:45:19.302 15:49:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:19.302 15:49:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:19.302 15:49:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:45:19.302 15:49:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:19.302 15:49:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:19.302 15:49:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:45:19.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:45:19.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:45:19.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:45:19.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:45:19.302 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:45:19.302 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:45:19.302 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:45:19.302 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:45:19.302 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:45:19.302 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:45:19.302 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:45:19.302 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:45:19.302 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:45:19.302 ' 00:45:25.858 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:45:25.858 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:45:25.858 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:25.858 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:45:25.858 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:45:25.858 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:45:25.858 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:45:25.858 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:25.858 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:45:25.858 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:45:25.858 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:45:25.858 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:45:25.858 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:45:25.858 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2612 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2612 ']' 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2612 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2612 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2612' 00:45:25.858 killing process with pid 2612 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 2612 00:45:25.858 15:49:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 2612 00:45:26.795 15:49:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:45:26.795 15:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:45:26.795 15:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2612 ']' 00:45:26.795 15:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2612 00:45:26.795 15:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2612 ']' 00:45:26.795 15:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2612 00:45:26.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2612) - No such process 00:45:26.795 15:49:54 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@979 -- # echo 'Process with pid 2612 is not found' 00:45:26.795 Process with pid 2612 is not found 00:45:26.795 15:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:45:26.795 15:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:45:26.795 15:49:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:45:26.795 00:45:26.795 real 0m19.560s 00:45:26.795 user 0m42.355s 00:45:26.795 sys 0m0.948s 00:45:26.795 15:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:26.795 15:49:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:26.795 ************************************ 00:45:26.795 END TEST spdkcli_nvmf_tcp 00:45:26.795 ************************************ 00:45:26.795 15:49:54 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:26.795 15:49:54 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:45:26.795 15:49:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:45:26.795 15:49:54 -- common/autotest_common.sh@10 -- # set +x 00:45:26.795 ************************************ 00:45:26.795 START TEST nvmf_identify_passthru 00:45:26.795 ************************************ 00:45:26.795 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:26.795 * Looking for test storage... 
00:45:26.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:26.795 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:26.795 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:45:26.795 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:26.795 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:26.795 15:49:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:45:26.796 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:26.796 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:26.796 --rc genhtml_branch_coverage=1 00:45:26.796 --rc genhtml_function_coverage=1 00:45:26.796 --rc genhtml_legend=1 00:45:26.796 --rc geninfo_all_blocks=1 00:45:26.796 --rc geninfo_unexecuted_blocks=1 00:45:26.796 00:45:26.796 ' 00:45:26.796 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:26.796 --rc genhtml_branch_coverage=1 00:45:26.796 --rc genhtml_function_coverage=1 
00:45:26.796 --rc genhtml_legend=1 00:45:26.796 --rc geninfo_all_blocks=1 00:45:26.796 --rc geninfo_unexecuted_blocks=1 00:45:26.796 00:45:26.796 ' 00:45:26.796 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:26.796 --rc genhtml_branch_coverage=1 00:45:26.796 --rc genhtml_function_coverage=1 00:45:26.796 --rc genhtml_legend=1 00:45:26.796 --rc geninfo_all_blocks=1 00:45:26.796 --rc geninfo_unexecuted_blocks=1 00:45:26.796 00:45:26.796 ' 00:45:26.796 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:26.796 --rc genhtml_branch_coverage=1 00:45:26.796 --rc genhtml_function_coverage=1 00:45:26.796 --rc genhtml_legend=1 00:45:26.796 --rc geninfo_all_blocks=1 00:45:26.796 --rc geninfo_unexecuted_blocks=1 00:45:26.796 00:45:26.796 ' 00:45:26.796 15:49:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:26.796 15:49:54 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:26.796 15:49:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:26.796 15:49:54 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:26.796 15:49:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:26.796 15:49:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:45:26.796 15:49:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:26.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:26.796 15:49:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:26.796 15:49:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:26.796 15:49:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:26.796 15:49:54 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:26.796 15:49:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:26.796 15:49:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:45:26.796 15:49:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:26.796 15:49:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:26.796 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:26.796 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:26.797 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:26.797 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:26.797 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:26.797 15:49:54 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:45:26.797 15:49:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:45:33.368 15:49:59 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:33.368 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:33.369 
15:49:59 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:45:33.369 Found 0000:86:00.0 (0x8086 - 0x159b) 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:45:33.369 Found 0000:86:00.1 (0x8086 - 0x159b) 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:45:33.369 Found net devices under 0000:86:00.0: cvl_0_0 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:45:33.369 Found net devices under 0000:86:00.1: cvl_0_1 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:33.369 15:49:59 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:33.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:33.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:45:33.369 00:45:33.369 --- 10.0.0.2 ping statistics --- 00:45:33.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:33.369 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:33.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:33.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:45:33.369 00:45:33.369 --- 10.0.0.1 ping statistics --- 00:45:33.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:33.369 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:33.369 15:50:00 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:33.369 15:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:33.369 15:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:45:33.369 15:50:00 nvmf_identify_passthru -- 
common/autotest_common.sh@1496 -- # bdfs=() 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:45:33.369 15:50:00 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:45:33.369 15:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:45:33.369 15:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:45:33.369 15:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:45:33.369 15:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:45:33.369 15:50:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:45:38.638 15:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:45:38.638 15:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:45:38.638 15:50:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:45:38.638 15:50:05 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:45:42.826 15:50:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:45:42.826 15:50:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:45:42.826 15:50:09 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:42.826 15:50:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:42.826 15:50:10 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:45:42.826 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:42.826 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:42.826 15:50:10 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=10489 00:45:42.826 15:50:10 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:45:42.826 15:50:10 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:42.826 15:50:10 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 10489 00:45:42.827 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 10489 ']' 00:45:42.827 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:42.827 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:42.827 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:42.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:45:42.827 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:42.827 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:42.827 [2024-11-06 15:50:10.099538] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:45:42.827 [2024-11-06 15:50:10.099650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:42.827 [2024-11-06 15:50:10.233542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:42.827 [2024-11-06 15:50:10.341626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:42.827 [2024-11-06 15:50:10.341671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:42.827 [2024-11-06 15:50:10.341681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:42.827 [2024-11-06 15:50:10.341691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:42.827 [2024-11-06 15:50:10.341699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:45:42.827 [2024-11-06 15:50:10.344085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:42.827 [2024-11-06 15:50:10.344165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:42.827 [2024-11-06 15:50:10.344250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:42.827 [2024-11-06 15:50:10.344271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:43.391 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:43.391 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:45:43.391 15:50:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:45:43.391 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:43.391 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:43.391 INFO: Log level set to 20 00:45:43.391 INFO: Requests: 00:45:43.391 { 00:45:43.391 "jsonrpc": "2.0", 00:45:43.391 "method": "nvmf_set_config", 00:45:43.391 "id": 1, 00:45:43.391 "params": { 00:45:43.391 "admin_cmd_passthru": { 00:45:43.391 "identify_ctrlr": true 00:45:43.391 } 00:45:43.391 } 00:45:43.391 } 00:45:43.391 00:45:43.391 INFO: response: 00:45:43.391 { 00:45:43.391 "jsonrpc": "2.0", 00:45:43.391 "id": 1, 00:45:43.391 "result": true 00:45:43.391 } 00:45:43.391 00:45:43.391 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:43.391 15:50:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:45:43.391 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:43.391 15:50:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:43.391 INFO: Setting log level to 20 00:45:43.391 INFO: Setting log level to 20 00:45:43.391 INFO: Log level set to 20 00:45:43.391 INFO: Log level set to 20 00:45:43.391 
INFO: Requests: 00:45:43.391 { 00:45:43.391 "jsonrpc": "2.0", 00:45:43.391 "method": "framework_start_init", 00:45:43.391 "id": 1 00:45:43.391 } 00:45:43.391 00:45:43.391 INFO: Requests: 00:45:43.391 { 00:45:43.391 "jsonrpc": "2.0", 00:45:43.391 "method": "framework_start_init", 00:45:43.391 "id": 1 00:45:43.391 } 00:45:43.391 00:45:43.650 [2024-11-06 15:50:11.252241] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:45:43.650 INFO: response: 00:45:43.650 { 00:45:43.650 "jsonrpc": "2.0", 00:45:43.650 "id": 1, 00:45:43.650 "result": true 00:45:43.650 } 00:45:43.650 00:45:43.650 INFO: response: 00:45:43.650 { 00:45:43.650 "jsonrpc": "2.0", 00:45:43.650 "id": 1, 00:45:43.650 "result": true 00:45:43.650 } 00:45:43.650 00:45:43.650 15:50:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:43.650 15:50:11 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:43.650 15:50:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:43.650 15:50:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:43.650 INFO: Setting log level to 40 00:45:43.650 INFO: Setting log level to 40 00:45:43.650 INFO: Setting log level to 40 00:45:43.650 [2024-11-06 15:50:11.268742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:43.650 15:50:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:43.650 15:50:11 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:45:43.650 15:50:11 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:43.650 15:50:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:43.907 15:50:11 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:45:43.907 15:50:11 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:43.907 15:50:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:47.183 Nvme0n1 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:47.183 [2024-11-06 15:50:14.239605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.183 15:50:14 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:47.183 [ 00:45:47.183 { 00:45:47.183 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:45:47.183 "subtype": "Discovery", 00:45:47.183 "listen_addresses": [], 00:45:47.183 "allow_any_host": true, 00:45:47.183 "hosts": [] 00:45:47.183 }, 00:45:47.183 { 00:45:47.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:47.183 "subtype": "NVMe", 00:45:47.183 "listen_addresses": [ 00:45:47.183 { 00:45:47.183 "trtype": "TCP", 00:45:47.183 "adrfam": "IPv4", 00:45:47.183 "traddr": "10.0.0.2", 00:45:47.183 "trsvcid": "4420" 00:45:47.183 } 00:45:47.183 ], 00:45:47.183 "allow_any_host": true, 00:45:47.183 "hosts": [], 00:45:47.183 "serial_number": "SPDK00000000000001", 00:45:47.183 "model_number": "SPDK bdev Controller", 00:45:47.183 "max_namespaces": 1, 00:45:47.183 "min_cntlid": 1, 00:45:47.183 "max_cntlid": 65519, 00:45:47.183 "namespaces": [ 00:45:47.183 { 00:45:47.183 "nsid": 1, 00:45:47.183 "bdev_name": "Nvme0n1", 00:45:47.183 "name": "Nvme0n1", 00:45:47.183 "nguid": "42D99F810A3E47F3A9017E70156CC36E", 00:45:47.183 "uuid": "42d99f81-0a3e-47f3-a901-7e70156cc36e" 00:45:47.183 } 00:45:47.183 ] 00:45:47.183 } 00:45:47.183 ] 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:47.183 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:45:47.183 15:50:14 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:45:47.183 15:50:14 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:47.183 15:50:14 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:45:47.183 15:50:14 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:47.183 15:50:14 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:45:47.183 15:50:14 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:47.183 15:50:14 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:47.183 rmmod nvme_tcp 00:45:47.441 rmmod nvme_fabrics 00:45:47.441 rmmod nvme_keyring 00:45:47.441 15:50:14 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:47.441 15:50:14 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:45:47.441 15:50:14 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:45:47.441 15:50:14 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 10489 ']' 00:45:47.441 15:50:14 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 10489 00:45:47.441 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 10489 ']' 00:45:47.441 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 10489 00:45:47.441 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:45:47.441 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:47.441 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 10489 00:45:47.441 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:45:47.441 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:45:47.441 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 10489' 00:45:47.441 killing process with pid 10489 00:45:47.441 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 10489 00:45:47.441 15:50:14 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 10489 00:45:50.718 15:50:17 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:50.718 15:50:17 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:50.718 15:50:17 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:50.718 15:50:17 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:45:50.718 15:50:17 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:45:50.718 15:50:17 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:45:50.718 15:50:17 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:45:50.718 15:50:17 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:50.718 15:50:17 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:50.718 15:50:17 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:50.718 15:50:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:50.718 15:50:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:52.624 15:50:19 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:52.624 00:45:52.624 real 0m25.831s 00:45:52.624 user 0m37.150s 00:45:52.624 sys 0m6.451s 00:45:52.624 15:50:19 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:52.624 15:50:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:52.624 ************************************ 00:45:52.624 END TEST nvmf_identify_passthru 00:45:52.624 ************************************ 00:45:52.624 15:50:20 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:45:52.624 15:50:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:45:52.624 15:50:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:45:52.624 15:50:20 -- common/autotest_common.sh@10 -- # set +x 00:45:52.624 ************************************ 00:45:52.624 START TEST nvmf_dif 00:45:52.624 ************************************ 00:45:52.624 15:50:20 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:45:52.624 * Looking for test storage... 
00:45:52.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:52.624 15:50:20 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:52.624 15:50:20 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:45:52.624 15:50:20 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:52.624 15:50:20 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:45:52.624 15:50:20 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:52.624 15:50:20 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:52.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:52.624 --rc genhtml_branch_coverage=1 00:45:52.624 --rc genhtml_function_coverage=1 00:45:52.624 --rc genhtml_legend=1 00:45:52.624 --rc geninfo_all_blocks=1 00:45:52.624 --rc geninfo_unexecuted_blocks=1 00:45:52.624 00:45:52.624 ' 00:45:52.624 15:50:20 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:52.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:52.624 --rc genhtml_branch_coverage=1 00:45:52.624 --rc genhtml_function_coverage=1 00:45:52.624 --rc genhtml_legend=1 00:45:52.624 --rc geninfo_all_blocks=1 00:45:52.624 --rc geninfo_unexecuted_blocks=1 00:45:52.624 00:45:52.624 ' 00:45:52.624 15:50:20 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:45:52.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:52.624 --rc genhtml_branch_coverage=1 00:45:52.624 --rc genhtml_function_coverage=1 00:45:52.624 --rc genhtml_legend=1 00:45:52.624 --rc geninfo_all_blocks=1 00:45:52.624 --rc geninfo_unexecuted_blocks=1 00:45:52.624 00:45:52.624 ' 00:45:52.624 15:50:20 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:52.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:52.624 --rc genhtml_branch_coverage=1 00:45:52.624 --rc genhtml_function_coverage=1 00:45:52.624 --rc genhtml_legend=1 00:45:52.624 --rc geninfo_all_blocks=1 00:45:52.624 --rc geninfo_unexecuted_blocks=1 00:45:52.624 00:45:52.624 ' 00:45:52.624 15:50:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:45:52.624 15:50:20 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:52.624 15:50:20 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:52.624 15:50:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:52.624 15:50:20 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:52.624 15:50:20 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:52.624 15:50:20 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:45:52.624 15:50:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:52.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:52.624 15:50:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:45:52.624 15:50:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:45:52.624 15:50:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:45:52.624 15:50:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:45:52.624 15:50:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:52.624 15:50:20 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:52.624 15:50:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:52.624 15:50:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:52.883 15:50:20 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:52.883 15:50:20 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:52.883 15:50:20 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:45:52.883 15:50:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:45:58.158 15:50:25 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:45:58.158 Found 0000:86:00.0 (0x8086 - 0x159b) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:45:58.158 Found 0000:86:00.1 (0x8086 - 0x159b) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:58.158 15:50:25 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:45:58.158 Found net devices under 0000:86:00.0: cvl_0_0 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:58.158 15:50:25 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:45:58.158 Found net devices under 0000:86:00.1: cvl_0_1 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:58.159 
15:50:25 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:58.159 15:50:25 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:58.418 15:50:25 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:58.418 15:50:25 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:58.418 15:50:25 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:58.418 15:50:25 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:58.418 15:50:25 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:58.418 15:50:25 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:58.418 15:50:25 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:58.418 15:50:26 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:58.418 15:50:26 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:58.418 15:50:26 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:58.418 15:50:26 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:58.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:58.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:45:58.418 00:45:58.418 --- 10.0.0.2 ping statistics --- 00:45:58.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:58.418 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:45:58.418 15:50:26 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:58.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:58.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:45:58.418 00:45:58.418 --- 10.0.0.1 ping statistics --- 00:45:58.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:58.418 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:45:58.418 15:50:26 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:58.418 15:50:26 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:45:58.418 15:50:26 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:45:58.418 15:50:26 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:01.710 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:46:01.710 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:46:01.710 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:46:01.710 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:46:01.710 15:50:28 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:01.710 15:50:28 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:01.710 15:50:28 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:01.710 15:50:28 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:01.710 15:50:28 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:01.710 15:50:28 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:01.710 15:50:28 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:46:01.710 15:50:28 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:46:01.710 15:50:28 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:01.710 15:50:28 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:01.710 15:50:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:01.710 15:50:28 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=16217 00:46:01.710 15:50:28 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 16217 00:46:01.710 15:50:28 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 16217 ']' 00:46:01.710 15:50:28 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:46:01.710 15:50:28 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:01.710 15:50:28 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:46:01.710 15:50:28 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:46:01.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:01.710 15:50:28 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:46:01.710 15:50:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:01.710 [2024-11-06 15:50:29.007146] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:46:01.710 [2024-11-06 15:50:29.007246] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:01.710 [2024-11-06 15:50:29.135608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:01.710 [2024-11-06 15:50:29.238374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:01.710 [2024-11-06 15:50:29.238419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:01.710 [2024-11-06 15:50:29.238430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:01.710 [2024-11-06 15:50:29.238441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:01.710 [2024-11-06 15:50:29.238449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:46:01.710 [2024-11-06 15:50:29.239715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:02.277 15:50:29 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:46:02.277 15:50:29 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:46:02.277 15:50:29 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:02.277 15:50:29 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:02.277 15:50:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:02.277 15:50:29 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:02.277 15:50:29 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:46:02.277 15:50:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:46:02.278 15:50:29 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.278 15:50:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:02.278 [2024-11-06 15:50:29.839260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:02.278 15:50:29 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.278 15:50:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:46:02.278 15:50:29 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:46:02.278 15:50:29 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:02.278 15:50:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:02.278 ************************************ 00:46:02.278 START TEST fio_dif_1_default 00:46:02.278 ************************************ 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:02.278 bdev_null0 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:02.278 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:02.278 [2024-11-06 15:50:29.911568] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:02.537 { 00:46:02.537 "params": { 00:46:02.537 "name": "Nvme$subsystem", 00:46:02.537 "trtype": "$TEST_TRANSPORT", 00:46:02.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:02.537 "adrfam": "ipv4", 00:46:02.537 "trsvcid": "$NVMF_PORT", 00:46:02.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:02.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:02.537 "hdgst": ${hdgst:-false}, 00:46:02.537 "ddgst": ${ddgst:-false} 00:46:02.537 }, 00:46:02.537 "method": "bdev_nvme_attach_controller" 00:46:02.537 } 00:46:02.537 EOF 00:46:02.537 )") 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 
00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:02.537 "params": { 00:46:02.537 "name": "Nvme0", 00:46:02.537 "trtype": "tcp", 00:46:02.537 "traddr": "10.0.0.2", 00:46:02.537 "adrfam": "ipv4", 00:46:02.537 "trsvcid": "4420", 00:46:02.537 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:02.537 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:02.537 "hdgst": false, 00:46:02.537 "ddgst": false 00:46:02.537 }, 00:46:02.537 "method": "bdev_nvme_attach_controller" 00:46:02.537 }' 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # break 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:02.537 15:50:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:02.796 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:02.796 fio-3.35 00:46:02.796 Starting 1 thread 00:46:14.999 00:46:14.999 filename0: (groupid=0, jobs=1): err= 0: pid=16790: Wed Nov 6 15:50:41 2024 00:46:14.999 read: IOPS=193, BW=773KiB/s (791kB/s)(7744KiB/10023msec) 00:46:14.999 slat (nsec): min=6724, max=39298, avg=8088.65, stdev=2195.20 00:46:14.999 clat (usec): min=427, max=44648, avg=20684.11, stdev=20521.46 00:46:14.999 lat (usec): min=434, max=44687, avg=20692.20, stdev=20521.02 00:46:14.999 clat percentiles (usec): 00:46:14.999 | 1.00th=[ 453], 5.00th=[ 461], 10.00th=[ 469], 20.00th=[ 478], 
00:46:14.999 | 30.00th=[ 490], 40.00th=[ 519], 50.00th=[ 652], 60.00th=[41157], 00:46:14.999 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:46:14.999 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:46:14.999 | 99.99th=[44827] 00:46:14.999 bw ( KiB/s): min= 672, max= 832, per=99.92%, avg=772.80, stdev=39.23, samples=20 00:46:14.999 iops : min= 168, max= 208, avg=193.20, stdev= 9.81, samples=20 00:46:14.999 lat (usec) : 500=35.64%, 750=15.19% 00:46:14.999 lat (msec) : 50=49.17% 00:46:14.999 cpu : usr=94.15%, sys=5.53%, ctx=12, majf=0, minf=1634 00:46:14.999 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:14.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.999 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:14.999 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:14.999 00:46:14.999 Run status group 0 (all jobs): 00:46:14.999 READ: bw=773KiB/s (791kB/s), 773KiB/s-773KiB/s (791kB/s-791kB/s), io=7744KiB (7930kB), run=10023-10023msec 00:46:14.999 ----------------------------------------------------- 00:46:14.999 Suppressions used: 00:46:14.999 count bytes template 00:46:14.999 1 8 /usr/src/fio/parse.c 00:46:14.999 1 8 libtcmalloc_minimal.so 00:46:14.999 1 904 libcrypto.so 00:46:14.999 ----------------------------------------------------- 00:46:14.999 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:46:14.999 15:50:42 
nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.999 00:46:14.999 real 0m12.503s 00:46:14.999 user 0m17.601s 00:46:14.999 sys 0m1.086s 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:14.999 ************************************ 00:46:14.999 END TEST fio_dif_1_default 00:46:14.999 ************************************ 00:46:14.999 15:50:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:46:14.999 15:50:42 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:46:14.999 15:50:42 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:14.999 15:50:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:14.999 ************************************ 00:46:14.999 START TEST fio_dif_1_multi_subsystems 00:46:14.999 ************************************ 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:46:14.999 15:50:42 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:14.999 bdev_null0 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.999 15:50:42 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:14.999 [2024-11-06 15:50:42.486326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:14.999 bdev_null1 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:46:14.999 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:15.000 { 00:46:15.000 "params": { 00:46:15.000 "name": "Nvme$subsystem", 00:46:15.000 "trtype": "$TEST_TRANSPORT", 00:46:15.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:15.000 "adrfam": "ipv4", 00:46:15.000 "trsvcid": "$NVMF_PORT", 00:46:15.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:15.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:15.000 "hdgst": ${hdgst:-false}, 00:46:15.000 "ddgst": ${ddgst:-false} 00:46:15.000 }, 00:46:15.000 "method": "bdev_nvme_attach_controller" 00:46:15.000 } 00:46:15.000 EOF 00:46:15.000 )") 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@72 -- # (( file <= files )) 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:15.000 { 00:46:15.000 "params": { 00:46:15.000 "name": "Nvme$subsystem", 00:46:15.000 "trtype": "$TEST_TRANSPORT", 00:46:15.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:15.000 "adrfam": "ipv4", 00:46:15.000 "trsvcid": "$NVMF_PORT", 00:46:15.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:15.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:15.000 "hdgst": ${hdgst:-false}, 00:46:15.000 "ddgst": ${ddgst:-false} 00:46:15.000 }, 00:46:15.000 "method": "bdev_nvme_attach_controller" 00:46:15.000 } 00:46:15.000 EOF 00:46:15.000 )") 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:15.000 "params": { 00:46:15.000 "name": "Nvme0", 00:46:15.000 "trtype": "tcp", 00:46:15.000 "traddr": "10.0.0.2", 00:46:15.000 "adrfam": "ipv4", 00:46:15.000 "trsvcid": "4420", 00:46:15.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:15.000 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:15.000 "hdgst": false, 00:46:15.000 "ddgst": false 00:46:15.000 }, 00:46:15.000 "method": "bdev_nvme_attach_controller" 00:46:15.000 },{ 00:46:15.000 "params": { 00:46:15.000 "name": "Nvme1", 00:46:15.000 "trtype": "tcp", 00:46:15.000 "traddr": "10.0.0.2", 00:46:15.000 "adrfam": "ipv4", 00:46:15.000 "trsvcid": "4420", 00:46:15.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:15.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:15.000 "hdgst": false, 00:46:15.000 "ddgst": false 00:46:15.000 }, 00:46:15.000 "method": "bdev_nvme_attach_controller" 00:46:15.000 }' 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # break 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:15.000 15:50:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:15.565 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:15.565 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:15.565 fio-3.35 00:46:15.565 Starting 2 threads 00:46:27.889 00:46:27.889 filename0: (groupid=0, jobs=1): err= 0: pid=18856: Wed Nov 6 15:50:54 2024 00:46:27.889 read: IOPS=193, BW=772KiB/s (791kB/s)(7728KiB/10008msec) 00:46:27.889 slat (nsec): min=6846, max=30640, avg=8323.42, stdev=2161.44 00:46:27.889 clat (usec): min=420, max=42570, avg=20696.00, stdev=20555.12 00:46:27.889 lat (usec): min=427, max=42579, avg=20704.32, stdev=20554.59 00:46:27.889 clat percentiles (usec): 00:46:27.889 | 1.00th=[ 465], 5.00th=[ 478], 10.00th=[ 486], 20.00th=[ 494], 00:46:27.889 | 30.00th=[ 502], 40.00th=[ 515], 50.00th=[ 668], 60.00th=[41157], 00:46:27.889 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:46:27.889 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:46:27.889 | 99.99th=[42730] 00:46:27.889 bw ( KiB/s): min= 704, max= 832, per=49.63%, avg=771.20, stdev=25.22, samples=20 00:46:27.889 iops : min= 176, max= 208, avg=192.80, stdev= 6.30, samples=20 00:46:27.889 lat (usec) : 500=28.11%, 750=22.57%, 1000=0.05% 00:46:27.889 lat (msec) : 4=0.21%, 50=49.07% 00:46:27.889 cpu : usr=96.38%, sys=3.34%, ctx=12, majf=0, minf=1632 00:46:27.889 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:27.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.889 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.889 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:27.889 filename1: (groupid=0, jobs=1): err= 0: pid=18858: Wed Nov 6 15:50:54 2024 00:46:27.889 read: IOPS=195, BW=782KiB/s (801kB/s)(7840KiB/10021msec) 00:46:27.889 slat (nsec): min=6824, max=28439, avg=8182.22, stdev=1916.05 00:46:27.889 clat (usec): min=448, max=43804, avg=20426.85, stdev=20486.50 00:46:27.889 lat (usec): min=456, max=43832, 
avg=20435.03, stdev=20486.06 00:46:27.889 clat percentiles (usec): 00:46:27.889 | 1.00th=[ 461], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 486], 00:46:27.889 | 30.00th=[ 502], 40.00th=[ 537], 50.00th=[ 660], 60.00th=[41157], 00:46:27.889 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:46:27.889 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:46:27.889 | 99.99th=[43779] 00:46:27.889 bw ( KiB/s): min= 672, max= 960, per=50.34%, avg=782.40, stdev=60.96, samples=20 00:46:27.889 iops : min= 168, max= 240, avg=195.60, stdev=15.24, samples=20 00:46:27.889 lat (usec) : 500=29.34%, 750=21.48%, 1000=0.61% 00:46:27.889 lat (msec) : 50=48.57% 00:46:27.889 cpu : usr=96.87%, sys=2.85%, ctx=13, majf=0, minf=1636 00:46:27.889 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:27.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.889 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.889 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:27.889 00:46:27.889 Run status group 0 (all jobs): 00:46:27.889 READ: bw=1554KiB/s (1591kB/s), 772KiB/s-782KiB/s (791kB/s-801kB/s), io=15.2MiB (15.9MB), run=10008-10021msec 00:46:27.889 ----------------------------------------------------- 00:46:27.889 Suppressions used: 00:46:27.889 count bytes template 00:46:27.889 2 16 /usr/src/fio/parse.c 00:46:27.889 1 8 libtcmalloc_minimal.so 00:46:27.889 1 904 libcrypto.so 00:46:27.889 ----------------------------------------------------- 00:46:27.889 00:46:27.889 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:46:27.890 15:50:55 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:27.890 15:50:55 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.890 00:46:27.890 real 0m12.694s 00:46:27.890 user 0m27.437s 00:46:27.890 sys 0m1.197s 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:46:27.890 15:50:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:27.890 ************************************ 00:46:27.890 END TEST fio_dif_1_multi_subsystems 00:46:27.890 ************************************ 00:46:27.890 15:50:55 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:46:27.890 15:50:55 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:46:27.890 15:50:55 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:27.890 15:50:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:27.890 ************************************ 00:46:27.890 START TEST fio_dif_rand_params 00:46:27.890 ************************************ 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # 
runtime=5 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.890 bdev_null0 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.890 [2024-11-06 15:50:55.257050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:27.890 { 00:46:27.890 "params": { 00:46:27.890 "name": "Nvme$subsystem", 00:46:27.890 "trtype": "$TEST_TRANSPORT", 00:46:27.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:27.890 "adrfam": "ipv4", 00:46:27.890 "trsvcid": "$NVMF_PORT", 00:46:27.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:27.890 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:46:27.890 "hdgst": ${hdgst:-false}, 00:46:27.890 "ddgst": ${ddgst:-false} 00:46:27.890 }, 00:46:27.890 "method": "bdev_nvme_attach_controller" 00:46:27.890 } 00:46:27.890 EOF 00:46:27.890 )") 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:27.890 "params": { 00:46:27.890 "name": "Nvme0", 00:46:27.890 "trtype": "tcp", 00:46:27.890 "traddr": "10.0.0.2", 00:46:27.890 "adrfam": "ipv4", 00:46:27.890 "trsvcid": "4420", 00:46:27.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:27.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:27.890 "hdgst": false, 00:46:27.890 "ddgst": false 00:46:27.890 }, 00:46:27.890 "method": "bdev_nvme_attach_controller" 00:46:27.890 }' 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:27.890 15:50:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:28.149 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:46:28.149 ... 
00:46:28.149 fio-3.35 00:46:28.149 Starting 3 threads 00:46:34.712 00:46:34.713 filename0: (groupid=0, jobs=1): err= 0: pid=20938: Wed Nov 6 15:51:01 2024 00:46:34.713 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(167MiB/5045msec) 00:46:34.713 slat (nsec): min=7147, max=94247, avg=14188.28, stdev=2950.81 00:46:34.713 clat (usec): min=5299, max=51975, avg=11304.28, stdev=5344.25 00:46:34.713 lat (usec): min=5313, max=51988, avg=11318.47, stdev=5344.25 00:46:34.713 clat percentiles (usec): 00:46:34.713 | 1.00th=[ 6194], 5.00th=[ 7177], 10.00th=[ 8356], 20.00th=[ 9372], 00:46:34.713 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10683], 60.00th=[11207], 00:46:34.713 | 70.00th=[11731], 80.00th=[12256], 90.00th=[13042], 95.00th=[13960], 00:46:34.713 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51643], 99.95th=[52167], 00:46:34.713 | 99.99th=[52167] 00:46:34.713 bw ( KiB/s): min=29952, max=36096, per=33.22%, avg=34073.60, stdev=1951.32, samples=10 00:46:34.713 iops : min= 234, max= 282, avg=266.20, stdev=15.24, samples=10 00:46:34.713 lat (msec) : 10=35.11%, 20=63.17%, 50=0.83%, 100=0.90% 00:46:34.713 cpu : usr=94.83%, sys=4.82%, ctx=11, majf=0, minf=2751 00:46:34.713 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:34.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:34.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:34.713 issued rwts: total=1333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:34.713 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:34.713 filename0: (groupid=0, jobs=1): err= 0: pid=20939: Wed Nov 6 15:51:01 2024 00:46:34.713 read: IOPS=283, BW=35.5MiB/s (37.2MB/s)(178MiB/5006msec) 00:46:34.713 slat (nsec): min=7121, max=75695, avg=13912.10, stdev=2417.12 00:46:34.713 clat (usec): min=5798, max=52781, avg=10558.18, stdev=5354.00 00:46:34.713 lat (usec): min=5812, max=52794, avg=10572.09, stdev=5353.98 00:46:34.713 clat percentiles (usec): 00:46:34.713 | 
1.00th=[ 6718], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[ 8979], 00:46:34.713 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:46:34.713 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11600], 95.00th=[12256], 00:46:34.713 | 99.00th=[49546], 99.50th=[51119], 99.90th=[51643], 99.95th=[52691], 00:46:34.713 | 99.99th=[52691] 00:46:34.713 bw ( KiB/s): min=33792, max=39168, per=35.39%, avg=36300.80, stdev=1937.65, samples=10 00:46:34.713 iops : min= 264, max= 306, avg=283.60, stdev=15.14, samples=10 00:46:34.713 lat (msec) : 10=53.80%, 20=44.51%, 50=0.85%, 100=0.85% 00:46:34.713 cpu : usr=94.61%, sys=5.03%, ctx=8, majf=0, minf=1635 00:46:34.713 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:34.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:34.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:34.713 issued rwts: total=1420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:34.713 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:34.713 filename0: (groupid=0, jobs=1): err= 0: pid=20940: Wed Nov 6 15:51:01 2024 00:46:34.713 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(161MiB/5045msec) 00:46:34.713 slat (nsec): min=7571, max=81840, avg=14021.81, stdev=2449.57 00:46:34.713 clat (usec): min=3885, max=52340, avg=11682.44, stdev=3691.77 00:46:34.713 lat (usec): min=3896, max=52354, avg=11696.46, stdev=3692.11 00:46:34.713 clat percentiles (usec): 00:46:34.713 | 1.00th=[ 4228], 5.00th=[ 7439], 10.00th=[ 8160], 20.00th=[ 9634], 00:46:34.713 | 30.00th=[10552], 40.00th=[11469], 50.00th=[12125], 60.00th=[12518], 00:46:34.713 | 70.00th=[12780], 80.00th=[13304], 90.00th=[13829], 95.00th=[14353], 00:46:34.713 | 99.00th=[15401], 99.50th=[48497], 99.90th=[49546], 99.95th=[52167], 00:46:34.713 | 99.99th=[52167] 00:46:34.713 bw ( KiB/s): min=29184, max=36864, per=32.14%, avg=32972.80, stdev=2839.95, samples=10 00:46:34.713 iops : min= 228, max= 288, avg=257.60, 
stdev=22.19, samples=10 00:46:34.713 lat (msec) : 4=0.16%, 10=24.57%, 20=74.65%, 50=0.54%, 100=0.08% 00:46:34.713 cpu : usr=94.53%, sys=5.08%, ctx=7, majf=0, minf=2815 00:46:34.713 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:34.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:34.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:34.713 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:34.713 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:34.713 00:46:34.713 Run status group 0 (all jobs): 00:46:34.713 READ: bw=100MiB/s (105MB/s), 32.0MiB/s-35.5MiB/s (33.5MB/s-37.2MB/s), io=505MiB (530MB), run=5006-5045msec 00:46:35.281 ----------------------------------------------------- 00:46:35.281 Suppressions used: 00:46:35.281 count bytes template 00:46:35.281 5 44 /usr/src/fio/parse.c 00:46:35.281 1 8 libtcmalloc_minimal.so 00:46:35.281 1 904 libcrypto.so 00:46:35.281 ----------------------------------------------------- 00:46:35.281 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.281 15:51:02 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.281 bdev_null0 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.281 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.540 [2024-11-06 15:51:02.936276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:46:35.540 bdev_null1 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.540 bdev_null2 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.540 15:51:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:46:35.540 15:51:03 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:35.540 { 00:46:35.540 "params": { 00:46:35.540 "name": "Nvme$subsystem", 00:46:35.540 "trtype": "$TEST_TRANSPORT", 00:46:35.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:35.540 "adrfam": "ipv4", 00:46:35.540 "trsvcid": "$NVMF_PORT", 00:46:35.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:35.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:35.540 "hdgst": ${hdgst:-false}, 00:46:35.540 "ddgst": ${ddgst:-false} 00:46:35.540 }, 00:46:35.540 "method": "bdev_nvme_attach_controller" 00:46:35.540 } 00:46:35.540 EOF 00:46:35.540 )") 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:35.540 15:51:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:35.541 { 00:46:35.541 "params": { 00:46:35.541 "name": "Nvme$subsystem", 00:46:35.541 "trtype": "$TEST_TRANSPORT", 00:46:35.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:35.541 "adrfam": "ipv4", 00:46:35.541 "trsvcid": "$NVMF_PORT", 00:46:35.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:35.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:35.541 "hdgst": ${hdgst:-false}, 00:46:35.541 "ddgst": ${ddgst:-false} 00:46:35.541 }, 00:46:35.541 "method": "bdev_nvme_attach_controller" 00:46:35.541 } 00:46:35.541 EOF 00:46:35.541 )") 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:35.541 { 00:46:35.541 "params": { 00:46:35.541 "name": "Nvme$subsystem", 00:46:35.541 "trtype": "$TEST_TRANSPORT", 00:46:35.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:35.541 "adrfam": "ipv4", 00:46:35.541 "trsvcid": "$NVMF_PORT", 00:46:35.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:35.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:35.541 "hdgst": ${hdgst:-false}, 00:46:35.541 "ddgst": ${ddgst:-false} 00:46:35.541 }, 00:46:35.541 "method": "bdev_nvme_attach_controller" 00:46:35.541 } 00:46:35.541 EOF 00:46:35.541 )") 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:35.541 "params": { 00:46:35.541 "name": "Nvme0", 00:46:35.541 "trtype": "tcp", 00:46:35.541 "traddr": "10.0.0.2", 00:46:35.541 "adrfam": "ipv4", 00:46:35.541 "trsvcid": "4420", 00:46:35.541 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:35.541 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:35.541 "hdgst": false, 00:46:35.541 "ddgst": false 00:46:35.541 }, 00:46:35.541 "method": "bdev_nvme_attach_controller" 00:46:35.541 },{ 00:46:35.541 "params": { 00:46:35.541 "name": "Nvme1", 00:46:35.541 "trtype": "tcp", 00:46:35.541 "traddr": "10.0.0.2", 00:46:35.541 "adrfam": "ipv4", 00:46:35.541 "trsvcid": "4420", 00:46:35.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:35.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:35.541 "hdgst": false, 00:46:35.541 "ddgst": false 00:46:35.541 }, 00:46:35.541 "method": "bdev_nvme_attach_controller" 00:46:35.541 },{ 00:46:35.541 "params": { 00:46:35.541 "name": "Nvme2", 00:46:35.541 "trtype": "tcp", 00:46:35.541 "traddr": "10.0.0.2", 00:46:35.541 "adrfam": "ipv4", 00:46:35.541 "trsvcid": "4420", 00:46:35.541 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:46:35.541 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:46:35.541 "hdgst": false, 00:46:35.541 "ddgst": false 00:46:35.541 }, 00:46:35.541 "method": "bdev_nvme_attach_controller" 00:46:35.541 }' 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:35.541 15:51:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:35.799 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:35.799 ... 00:46:35.799 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:35.799 ... 00:46:35.799 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:35.799 ... 00:46:35.799 fio-3.35 00:46:35.799 Starting 24 threads 00:46:47.996 00:46:47.996 filename0: (groupid=0, jobs=1): err= 0: pid=22387: Wed Nov 6 15:51:14 2024 00:46:47.996 read: IOPS=445, BW=1783KiB/s (1825kB/s)(17.4MiB/10017msec) 00:46:47.996 slat (nsec): min=8851, max=45629, avg=16260.15, stdev=6456.22 00:46:47.996 clat (msec): min=20, max=101, avg=35.77, stdev= 4.16 00:46:47.996 lat (msec): min=20, max=101, avg=35.78, stdev= 4.16 00:46:47.996 clat percentiles (msec): 00:46:47.996 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:46:47.996 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:46:47.996 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 36], 95.00th=[ 36], 00:46:47.996 | 99.00th=[ 37], 99.50th=[ 50], 99.90th=[ 103], 99.95th=[ 103], 00:46:47.996 | 99.99th=[ 103] 00:46:47.996 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1779.20, stdev=57.24, samples=20 00:46:47.996 iops : min= 384, max= 448, avg=444.80, stdev=14.31, samples=20 00:46:47.996 lat (msec) : 50=99.51%, 100=0.13%, 250=0.36% 00:46:47.996 cpu : usr=98.54%, sys=1.06%, ctx=9, majf=0, minf=1631 00:46:47.996 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:47.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.996 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.996 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.996 filename0: (groupid=0, jobs=1): err= 0: pid=22389: Wed Nov 6 15:51:14 2024 00:46:47.996 read: IOPS=446, BW=1785KiB/s (1828kB/s)(17.4MiB/10005msec) 00:46:47.996 slat (nsec): min=8871, max=55360, avg=27847.24, stdev=8292.07 00:46:47.996 clat (usec): min=21125, max=93558, avg=35601.12, stdev=3601.88 00:46:47.996 lat (usec): min=21141, max=93587, avg=35628.97, stdev=3601.09 00:46:47.996 clat percentiles (usec): 00:46:47.996 | 1.00th=[34866], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:46:47.996 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.997 | 70.00th=[35390], 80.00th=[35390], 90.00th=[35914], 95.00th=[35914], 00:46:47.997 | 99.00th=[36439], 99.50th=[36963], 99.90th=[93848], 99.95th=[93848], 00:46:47.997 | 99.99th=[93848] 00:46:47.997 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1778.53, stdev=58.73, samples=19 00:46:47.997 iops : min= 384, max= 448, avg=444.63, stdev=14.68, samples=19 00:46:47.997 lat (msec) : 50=99.64%, 100=0.36% 00:46:47.997 cpu : usr=98.33%, sys=1.27%, ctx=13, majf=0, minf=1633 00:46:47.997 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:47.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.997 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.997 filename0: (groupid=0, jobs=1): err= 0: pid=22390: Wed Nov 6 15:51:14 2024 00:46:47.997 read: IOPS=445, BW=1784KiB/s (1827kB/s)(17.4MiB/10005msec) 00:46:47.997 slat (usec): min=10, max=145, avg=42.43, stdev=10.57 00:46:47.997 clat (msec): min=15, max=106, avg=35.49, stdev= 4.59 00:46:47.997 lat (msec): min=15, max=106, 
avg=35.53, stdev= 4.59 00:46:47.997 clat percentiles (msec): 00:46:47.997 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 35], 00:46:47.997 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:46:47.997 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 36], 95.00th=[ 36], 00:46:47.997 | 99.00th=[ 38], 99.50th=[ 50], 99.90th=[ 107], 99.95th=[ 107], 00:46:47.997 | 99.99th=[ 107] 00:46:47.997 bw ( KiB/s): min= 1410, max= 1920, per=4.14%, avg=1777.79, stdev=93.91, samples=19 00:46:47.997 iops : min= 352, max= 480, avg=444.42, stdev=23.59, samples=19 00:46:47.997 lat (msec) : 20=0.36%, 50=99.15%, 100=0.13%, 250=0.36% 00:46:47.997 cpu : usr=98.65%, sys=0.90%, ctx=16, majf=0, minf=1635 00:46:47.997 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:46:47.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 issued rwts: total=4462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.997 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.997 filename0: (groupid=0, jobs=1): err= 0: pid=22391: Wed Nov 6 15:51:14 2024 00:46:47.997 read: IOPS=445, BW=1783KiB/s (1826kB/s)(17.4MiB/10014msec) 00:46:47.997 slat (nsec): min=5385, max=56944, avg=22267.60, stdev=6638.06 00:46:47.997 clat (msec): min=14, max=113, avg=35.68, stdev= 4.88 00:46:47.997 lat (msec): min=14, max=113, avg=35.70, stdev= 4.88 00:46:47.997 clat percentiles (msec): 00:46:47.997 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:46:47.997 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:46:47.997 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 36], 95.00th=[ 36], 00:46:47.997 | 99.00th=[ 37], 99.50th=[ 38], 99.90th=[ 114], 99.95th=[ 114], 00:46:47.997 | 99.99th=[ 114] 00:46:47.997 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1778.53, stdev=103.59, samples=19 00:46:47.997 iops : min= 352, max= 480, avg=444.63, stdev=25.90, 
samples=19 00:46:47.997 lat (msec) : 20=0.36%, 50=99.28%, 250=0.36% 00:46:47.997 cpu : usr=98.16%, sys=1.42%, ctx=16, majf=0, minf=1635 00:46:47.997 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:47.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.997 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.997 filename0: (groupid=0, jobs=1): err= 0: pid=22392: Wed Nov 6 15:51:14 2024 00:46:47.997 read: IOPS=447, BW=1791KiB/s (1834kB/s)(17.5MiB/10007msec) 00:46:47.997 slat (nsec): min=5411, max=46417, avg=22997.26, stdev=6752.68 00:46:47.997 clat (usec): min=20433, max=63244, avg=35527.58, stdev=2256.74 00:46:47.997 lat (usec): min=20449, max=63265, avg=35550.57, stdev=2256.20 00:46:47.997 clat percentiles (usec): 00:46:47.997 | 1.00th=[29230], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:46:47.997 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.997 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[35914], 00:46:47.997 | 99.00th=[36963], 99.50th=[50070], 99.90th=[63177], 99.95th=[63177], 00:46:47.997 | 99.99th=[63177] 00:46:47.997 bw ( KiB/s): min= 1664, max= 1792, per=4.16%, avg=1785.26, stdev=29.37, samples=19 00:46:47.997 iops : min= 416, max= 448, avg=446.32, stdev= 7.34, samples=19 00:46:47.997 lat (msec) : 50=99.42%, 100=0.58% 00:46:47.997 cpu : usr=98.53%, sys=1.07%, ctx=18, majf=0, minf=1633 00:46:47.997 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:46:47.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.997 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:46:47.997 filename0: (groupid=0, jobs=1): err= 0: pid=22394: Wed Nov 6 15:51:14 2024 00:46:47.997 read: IOPS=445, BW=1784KiB/s (1826kB/s)(17.4MiB/10011msec) 00:46:47.997 slat (nsec): min=5691, max=74234, avg=22078.41, stdev=6538.94 00:46:47.997 clat (msec): min=15, max=112, avg=35.68, stdev= 4.79 00:46:47.997 lat (msec): min=15, max=112, avg=35.70, stdev= 4.79 00:46:47.997 clat percentiles (msec): 00:46:47.997 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:46:47.997 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:46:47.997 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 36], 95.00th=[ 36], 00:46:47.997 | 99.00th=[ 37], 99.50th=[ 38], 99.90th=[ 113], 99.95th=[ 113], 00:46:47.997 | 99.99th=[ 113] 00:46:47.997 bw ( KiB/s): min= 1410, max= 1920, per=4.15%, avg=1778.63, stdev=103.19, samples=19 00:46:47.997 iops : min= 352, max= 480, avg=444.63, stdev=25.90, samples=19 00:46:47.997 lat (msec) : 20=0.36%, 50=99.28%, 250=0.36% 00:46:47.997 cpu : usr=98.38%, sys=1.22%, ctx=14, majf=0, minf=1632 00:46:47.997 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:46:47.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.997 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.997 filename0: (groupid=0, jobs=1): err= 0: pid=22395: Wed Nov 6 15:51:14 2024 00:46:47.997 read: IOPS=446, BW=1785KiB/s (1828kB/s)(17.4MiB/10002msec) 00:46:47.997 slat (nsec): min=5442, max=53768, avg=26984.59, stdev=8350.98 00:46:47.997 clat (usec): min=21116, max=89924, avg=35591.44, stdev=3380.19 00:46:47.997 lat (usec): min=21133, max=89943, avg=35618.43, stdev=3379.22 00:46:47.997 clat percentiles (usec): 00:46:47.997 | 1.00th=[34866], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:46:47.997 | 
30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.997 | 70.00th=[35390], 80.00th=[35390], 90.00th=[35914], 95.00th=[35914], 00:46:47.997 | 99.00th=[36439], 99.50th=[36963], 99.90th=[89654], 99.95th=[89654], 00:46:47.997 | 99.99th=[89654] 00:46:47.997 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1778.53, stdev=58.73, samples=19 00:46:47.997 iops : min= 384, max= 448, avg=444.63, stdev=14.68, samples=19 00:46:47.997 lat (msec) : 50=99.64%, 100=0.36% 00:46:47.997 cpu : usr=98.07%, sys=1.52%, ctx=18, majf=0, minf=1634 00:46:47.997 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:46:47.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.997 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.997 filename0: (groupid=0, jobs=1): err= 0: pid=22396: Wed Nov 6 15:51:14 2024 00:46:47.997 read: IOPS=446, BW=1785KiB/s (1828kB/s)(17.4MiB/10005msec) 00:46:47.997 slat (nsec): min=4025, max=53649, avg=27580.50, stdev=8393.57 00:46:47.997 clat (msec): min=21, max=104, avg=35.60, stdev= 3.66 00:46:47.997 lat (msec): min=21, max=104, avg=35.63, stdev= 3.66 00:46:47.997 clat percentiles (msec): 00:46:47.997 | 1.00th=[ 35], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 36], 00:46:47.997 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:46:47.997 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 36], 95.00th=[ 36], 00:46:47.997 | 99.00th=[ 37], 99.50th=[ 37], 99.90th=[ 93], 99.95th=[ 93], 00:46:47.997 | 99.99th=[ 105] 00:46:47.997 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1778.53, stdev=58.73, samples=19 00:46:47.997 iops : min= 384, max= 448, avg=444.63, stdev=14.68, samples=19 00:46:47.997 lat (msec) : 50=99.64%, 100=0.31%, 250=0.04% 00:46:47.997 cpu : usr=98.39%, sys=1.20%, ctx=18, majf=0, minf=1633 00:46:47.997 
IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:47.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.997 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.997 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.997 filename1: (groupid=0, jobs=1): err= 0: pid=22397: Wed Nov 6 15:51:14 2024 00:46:47.997 read: IOPS=452, BW=1811KiB/s (1855kB/s)(17.7MiB/10009msec) 00:46:47.997 slat (nsec): min=4248, max=73406, avg=19040.58, stdev=9867.61 00:46:47.997 clat (msec): min=15, max=126, avg=35.24, stdev= 6.10 00:46:47.997 lat (msec): min=15, max=126, avg=35.26, stdev= 6.10 00:46:47.997 clat percentiles (msec): 00:46:47.997 | 1.00th=[ 24], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 32], 00:46:47.997 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:46:47.997 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 43], 00:46:47.997 | 99.00th=[ 45], 99.50th=[ 50], 99.90th=[ 111], 99.95th=[ 111], 00:46:47.997 | 99.99th=[ 127] 00:46:47.997 bw ( KiB/s): min= 1424, max= 1904, per=4.21%, avg=1807.16, stdev=99.27, samples=19 00:46:47.997 iops : min= 356, max= 476, avg=451.79, stdev=24.82, samples=19 00:46:47.998 lat (msec) : 20=0.35%, 50=99.29%, 250=0.35% 00:46:47.998 cpu : usr=98.30%, sys=1.30%, ctx=15, majf=0, minf=1633 00:46:47.998 IO depths : 1=0.8%, 2=1.9%, 4=5.8%, 8=76.5%, 16=15.0%, 32=0.0%, >=64=0.0% 00:46:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 complete : 0=0.0%, 4=89.7%, 8=7.9%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 issued rwts: total=4532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.998 filename1: (groupid=0, jobs=1): err= 0: pid=22398: Wed Nov 6 15:51:14 2024 00:46:47.998 read: IOPS=446, BW=1785KiB/s (1828kB/s)(17.4MiB/10004msec) 00:46:47.998 
slat (nsec): min=5130, max=46370, avg=23257.03, stdev=6600.51 00:46:47.998 clat (usec): min=13692, max=94802, avg=35639.90, stdev=3408.47 00:46:47.998 lat (usec): min=13703, max=94824, avg=35663.15, stdev=3407.63 00:46:47.998 clat percentiles (usec): 00:46:47.998 | 1.00th=[34866], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:46:47.998 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.998 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[35914], 00:46:47.998 | 99.00th=[36963], 99.50th=[36963], 99.90th=[87557], 99.95th=[87557], 00:46:47.998 | 99.99th=[94897] 00:46:47.998 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1785.26, stdev=67.11, samples=19 00:46:47.998 iops : min= 384, max= 480, avg=446.32, stdev=16.78, samples=19 00:46:47.998 lat (msec) : 20=0.04%, 50=99.51%, 100=0.45% 00:46:47.998 cpu : usr=98.29%, sys=1.29%, ctx=19, majf=0, minf=1633 00:46:47.998 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.998 filename1: (groupid=0, jobs=1): err= 0: pid=22400: Wed Nov 6 15:51:14 2024 00:46:47.998 read: IOPS=452, BW=1808KiB/s (1852kB/s)(17.7MiB/10033msec) 00:46:47.998 slat (nsec): min=6694, max=53629, avg=24927.51, stdev=8775.93 00:46:47.998 clat (usec): min=2310, max=51818, avg=35129.42, stdev=3436.90 00:46:47.998 lat (usec): min=2319, max=51852, avg=35154.34, stdev=3437.85 00:46:47.998 clat percentiles (usec): 00:46:47.998 | 1.00th=[ 7635], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:46:47.998 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.998 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[35914], 00:46:47.998 | 
99.00th=[36439], 99.50th=[38536], 99.90th=[51643], 99.95th=[51643], 00:46:47.998 | 99.99th=[51643] 00:46:47.998 bw ( KiB/s): min= 1792, max= 2052, per=4.23%, avg=1812.42, stdev=65.00, samples=19 00:46:47.998 iops : min= 448, max= 513, avg=453.11, stdev=16.25, samples=19 00:46:47.998 lat (msec) : 4=0.04%, 10=1.01%, 50=98.59%, 100=0.35% 00:46:47.998 cpu : usr=98.34%, sys=1.26%, ctx=14, majf=0, minf=1634 00:46:47.998 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:46:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 issued rwts: total=4536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.998 filename1: (groupid=0, jobs=1): err= 0: pid=22401: Wed Nov 6 15:51:14 2024 00:46:47.998 read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10009msec) 00:46:47.998 slat (nsec): min=5707, max=50620, avg=21892.21, stdev=6509.39 00:46:47.998 clat (usec): min=20284, max=58839, avg=35562.60, stdev=1935.78 00:46:47.998 lat (usec): min=20302, max=58859, avg=35584.49, stdev=1935.21 00:46:47.998 clat percentiles (usec): 00:46:47.998 | 1.00th=[34866], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:46:47.998 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.998 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[35914], 00:46:47.998 | 99.00th=[38536], 99.50th=[50070], 99.90th=[55313], 99.95th=[55313], 00:46:47.998 | 99.99th=[58983] 00:46:47.998 bw ( KiB/s): min= 1664, max= 1920, per=4.18%, avg=1792.00, stdev=73.90, samples=19 00:46:47.998 iops : min= 416, max= 480, avg=448.00, stdev=18.48, samples=19 00:46:47.998 lat (msec) : 50=99.49%, 100=0.51% 00:46:47.998 cpu : usr=98.36%, sys=1.24%, ctx=16, majf=0, minf=1635 00:46:47.998 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:46:47.998 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.998 filename1: (groupid=0, jobs=1): err= 0: pid=22402: Wed Nov 6 15:51:14 2024 00:46:47.998 read: IOPS=445, BW=1783KiB/s (1826kB/s)(17.4MiB/10013msec) 00:46:47.998 slat (nsec): min=4088, max=53087, avg=27730.48, stdev=7977.90 00:46:47.998 clat (msec): min=21, max=100, avg=35.64, stdev= 4.01 00:46:47.998 lat (msec): min=21, max=100, avg=35.67, stdev= 4.01 00:46:47.998 clat percentiles (msec): 00:46:47.998 | 1.00th=[ 35], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 36], 00:46:47.998 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:46:47.998 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 36], 95.00th=[ 36], 00:46:47.998 | 99.00th=[ 37], 99.50th=[ 37], 99.90th=[ 102], 99.95th=[ 102], 00:46:47.998 | 99.99th=[ 102] 00:46:47.998 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1778.53, stdev=58.73, samples=19 00:46:47.998 iops : min= 384, max= 448, avg=444.63, stdev=14.68, samples=19 00:46:47.998 lat (msec) : 50=99.64%, 250=0.36% 00:46:47.998 cpu : usr=98.34%, sys=1.25%, ctx=15, majf=0, minf=1635 00:46:47.998 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:46:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.998 filename1: (groupid=0, jobs=1): err= 0: pid=22403: Wed Nov 6 15:51:14 2024 00:46:47.998 read: IOPS=447, BW=1789KiB/s (1832kB/s)(17.5MiB/10016msec) 00:46:47.998 slat (nsec): min=8320, max=46272, avg=20077.08, stdev=6240.13 00:46:47.998 clat (usec): min=25666, max=70476, 
avg=35600.15, stdev=2253.75 00:46:47.998 lat (usec): min=25693, max=70507, avg=35620.23, stdev=2253.12 00:46:47.998 clat percentiles (usec): 00:46:47.998 | 1.00th=[33817], 5.00th=[35390], 10.00th=[35390], 20.00th=[35390], 00:46:47.998 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.998 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[35914], 00:46:47.998 | 99.00th=[36439], 99.50th=[37487], 99.90th=[70779], 99.95th=[70779], 00:46:47.998 | 99.99th=[70779] 00:46:47.998 bw ( KiB/s): min= 1664, max= 1792, per=4.16%, avg=1785.26, stdev=29.37, samples=19 00:46:47.998 iops : min= 416, max= 448, avg=446.32, stdev= 7.34, samples=19 00:46:47.998 lat (msec) : 50=99.64%, 100=0.36% 00:46:47.998 cpu : usr=98.32%, sys=1.27%, ctx=16, majf=0, minf=1635 00:46:47.998 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.998 filename1: (groupid=0, jobs=1): err= 0: pid=22404: Wed Nov 6 15:51:14 2024 00:46:47.998 read: IOPS=445, BW=1784KiB/s (1827kB/s)(17.4MiB/10010msec) 00:46:47.998 slat (nsec): min=6777, max=48816, avg=22587.01, stdev=7350.51 00:46:47.998 clat (usec): min=15527, max=97946, avg=35654.10, stdev=4241.18 00:46:47.998 lat (usec): min=15537, max=97970, avg=35676.68, stdev=4240.76 00:46:47.998 clat percentiles (usec): 00:46:47.998 | 1.00th=[21365], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:46:47.998 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.998 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[35914], 00:46:47.998 | 99.00th=[49546], 99.50th=[50070], 99.90th=[98042], 99.95th=[98042], 00:46:47.998 | 99.99th=[98042] 00:46:47.998 bw ( 
KiB/s): min= 1539, max= 1792, per=4.15%, avg=1778.68, stdev=58.04, samples=19 00:46:47.998 iops : min= 384, max= 448, avg=444.63, stdev=14.68, samples=19 00:46:47.998 lat (msec) : 20=0.36%, 50=98.75%, 100=0.90% 00:46:47.998 cpu : usr=98.38%, sys=1.21%, ctx=15, majf=0, minf=1635 00:46:47.998 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:46:47.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.998 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.998 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.998 filename1: (groupid=0, jobs=1): err= 0: pid=22405: Wed Nov 6 15:51:14 2024 00:46:47.998 read: IOPS=446, BW=1784KiB/s (1827kB/s)(17.4MiB/10008msec) 00:46:47.998 slat (nsec): min=8418, max=73769, avg=21846.11, stdev=7112.28 00:46:47.998 clat (msec): min=15, max=107, avg=35.66, stdev= 4.50 00:46:47.998 lat (msec): min=15, max=107, avg=35.68, stdev= 4.50 00:46:47.998 clat percentiles (msec): 00:46:47.998 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:46:47.998 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:46:47.998 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 36], 95.00th=[ 36], 00:46:47.998 | 99.00th=[ 37], 99.50th=[ 38], 99.90th=[ 108], 99.95th=[ 108], 00:46:47.998 | 99.99th=[ 108] 00:46:47.998 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1778.53, stdev=94.40, samples=19 00:46:47.998 iops : min= 352, max= 480, avg=444.63, stdev=23.60, samples=19 00:46:47.998 lat (msec) : 20=0.36%, 50=99.24%, 100=0.04%, 250=0.36% 00:46:47.998 cpu : usr=98.32%, sys=1.26%, ctx=13, majf=0, minf=1635 00:46:47.998 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:46:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 
issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.999 filename2: (groupid=0, jobs=1): err= 0: pid=22407: Wed Nov 6 15:51:14 2024 00:46:47.999 read: IOPS=445, BW=1783KiB/s (1826kB/s)(17.4MiB/10012msec) 00:46:47.999 slat (nsec): min=4039, max=74440, avg=21609.35, stdev=7056.00 00:46:47.999 clat (msec): min=25, max=103, avg=35.70, stdev= 3.57 00:46:47.999 lat (msec): min=25, max=103, avg=35.72, stdev= 3.56 00:46:47.999 clat percentiles (msec): 00:46:47.999 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:46:47.999 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:46:47.999 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 36], 95.00th=[ 36], 00:46:47.999 | 99.00th=[ 37], 99.50th=[ 38], 99.90th=[ 93], 99.95th=[ 93], 00:46:47.999 | 99.99th=[ 105] 00:46:47.999 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1778.53, stdev=58.73, samples=19 00:46:47.999 iops : min= 384, max= 448, avg=444.63, stdev=14.68, samples=19 00:46:47.999 lat (msec) : 50=99.64%, 100=0.31%, 250=0.04% 00:46:47.999 cpu : usr=98.17%, sys=1.41%, ctx=14, majf=0, minf=1634 00:46:47.999 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.999 filename2: (groupid=0, jobs=1): err= 0: pid=22408: Wed Nov 6 15:51:14 2024 00:46:47.999 read: IOPS=445, BW=1783KiB/s (1826kB/s)(17.4MiB/10013msec) 00:46:47.999 slat (nsec): min=3973, max=57366, avg=27949.22, stdev=8276.90 00:46:47.999 clat (msec): min=21, max=100, avg=35.64, stdev= 4.07 00:46:47.999 lat (msec): min=21, max=100, avg=35.66, stdev= 4.07 00:46:47.999 clat percentiles (msec): 00:46:47.999 | 1.00th=[ 35], 
5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 36], 00:46:47.999 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:46:47.999 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 36], 95.00th=[ 36], 00:46:47.999 | 99.00th=[ 37], 99.50th=[ 46], 99.90th=[ 102], 99.95th=[ 102], 00:46:47.999 | 99.99th=[ 102] 00:46:47.999 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1778.53, stdev=58.73, samples=19 00:46:47.999 iops : min= 384, max= 448, avg=444.63, stdev=14.68, samples=19 00:46:47.999 lat (msec) : 50=99.64%, 250=0.36% 00:46:47.999 cpu : usr=98.41%, sys=1.18%, ctx=22, majf=0, minf=1633 00:46:47.999 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:46:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.999 filename2: (groupid=0, jobs=1): err= 0: pid=22409: Wed Nov 6 15:51:14 2024 00:46:47.999 read: IOPS=446, BW=1785KiB/s (1827kB/s)(17.4MiB/10006msec) 00:46:47.999 slat (nsec): min=4728, max=47581, avg=22893.30, stdev=6375.98 00:46:47.999 clat (usec): min=20454, max=90738, avg=35657.71, stdev=3432.43 00:46:47.999 lat (usec): min=20482, max=90757, avg=35680.61, stdev=3431.48 00:46:47.999 clat percentiles (usec): 00:46:47.999 | 1.00th=[34866], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:46:47.999 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.999 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[35914], 00:46:47.999 | 99.00th=[36963], 99.50th=[36963], 99.90th=[90702], 99.95th=[90702], 00:46:47.999 | 99.99th=[90702] 00:46:47.999 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1785.26, stdev=67.11, samples=19 00:46:47.999 iops : min= 384, max= 480, avg=446.32, stdev=16.78, samples=19 00:46:47.999 lat (msec) : 50=99.64%, 100=0.36% 
00:46:47.999 cpu : usr=98.27%, sys=1.32%, ctx=15, majf=0, minf=1635 00:46:47.999 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:46:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.999 filename2: (groupid=0, jobs=1): err= 0: pid=22410: Wed Nov 6 15:51:14 2024 00:46:47.999 read: IOPS=452, BW=1811KiB/s (1854kB/s)(17.7MiB/10001msec) 00:46:47.999 slat (nsec): min=8812, max=56688, avg=22102.73, stdev=8939.99 00:46:47.999 clat (usec): min=6002, max=58308, avg=35162.60, stdev=3364.11 00:46:47.999 lat (usec): min=6011, max=58327, avg=35184.71, stdev=3364.58 00:46:47.999 clat percentiles (usec): 00:46:47.999 | 1.00th=[13042], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:46:47.999 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.999 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[35914], 00:46:47.999 | 99.00th=[36963], 99.50th=[39060], 99.90th=[51643], 99.95th=[51643], 00:46:47.999 | 99.99th=[58459] 00:46:47.999 bw ( KiB/s): min= 1792, max= 2048, per=4.23%, avg=1812.21, stdev=64.19, samples=19 00:46:47.999 iops : min= 448, max= 512, avg=453.05, stdev=16.05, samples=19 00:46:47.999 lat (msec) : 10=0.66%, 20=0.75%, 50=98.23%, 100=0.35% 00:46:47.999 cpu : usr=98.19%, sys=1.40%, ctx=15, majf=0, minf=1636 00:46:47.999 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:46:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.999 filename2: 
(groupid=0, jobs=1): err= 0: pid=22412: Wed Nov 6 15:51:14 2024 00:46:47.999 read: IOPS=454, BW=1817KiB/s (1861kB/s)(17.8MiB/10002msec) 00:46:47.999 slat (nsec): min=5506, max=43731, avg=14267.98, stdev=5863.84 00:46:47.999 clat (usec): min=5799, max=58856, avg=35089.13, stdev=4170.62 00:46:47.999 lat (usec): min=5818, max=58875, avg=35103.40, stdev=4170.76 00:46:47.999 clat percentiles (usec): 00:46:47.999 | 1.00th=[ 6194], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:46:47.999 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.999 | 70.00th=[35390], 80.00th=[35914], 90.00th=[35914], 95.00th=[35914], 00:46:47.999 | 99.00th=[37487], 99.50th=[45351], 99.90th=[58983], 99.95th=[58983], 00:46:47.999 | 99.99th=[58983] 00:46:47.999 bw ( KiB/s): min= 1792, max= 2176, per=4.24%, avg=1818.95, stdev=91.30, samples=19 00:46:47.999 iops : min= 448, max= 544, avg=454.74, stdev=22.83, samples=19 00:46:47.999 lat (msec) : 10=1.41%, 20=0.70%, 50=97.54%, 100=0.35% 00:46:47.999 cpu : usr=98.40%, sys=1.18%, ctx=15, majf=0, minf=1634 00:46:47.999 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:46:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.999 filename2: (groupid=0, jobs=1): err= 0: pid=22413: Wed Nov 6 15:51:14 2024 00:46:47.999 read: IOPS=446, BW=1784KiB/s (1827kB/s)(17.4MiB/10007msec) 00:46:47.999 slat (nsec): min=9047, max=62875, avg=28082.99, stdev=8052.99 00:46:47.999 clat (usec): min=21174, max=95223, avg=35612.00, stdev=3683.57 00:46:47.999 lat (usec): min=21192, max=95257, avg=35640.09, stdev=3682.76 00:46:47.999 clat percentiles (usec): 00:46:47.999 | 1.00th=[34866], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:46:47.999 | 
30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.999 | 70.00th=[35390], 80.00th=[35390], 90.00th=[35914], 95.00th=[35914], 00:46:47.999 | 99.00th=[36439], 99.50th=[36963], 99.90th=[94897], 99.95th=[94897], 00:46:47.999 | 99.99th=[94897] 00:46:47.999 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1778.53, stdev=58.73, samples=19 00:46:47.999 iops : min= 384, max= 448, avg=444.63, stdev=14.68, samples=19 00:46:47.999 lat (msec) : 50=99.64%, 100=0.36% 00:46:47.999 cpu : usr=98.42%, sys=1.15%, ctx=15, majf=0, minf=1635 00:46:47.999 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:46:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.999 filename2: (groupid=0, jobs=1): err= 0: pid=22414: Wed Nov 6 15:51:14 2024 00:46:47.999 read: IOPS=454, BW=1817KiB/s (1860kB/s)(17.8MiB/10004msec) 00:46:47.999 slat (nsec): min=8834, max=56905, avg=16919.84, stdev=8251.69 00:46:47.999 clat (usec): min=5752, max=51770, avg=35082.59, stdev=3829.75 00:46:47.999 lat (usec): min=5763, max=51802, avg=35099.51, stdev=3829.60 00:46:47.999 clat percentiles (usec): 00:46:47.999 | 1.00th=[ 7701], 5.00th=[34866], 10.00th=[35390], 20.00th=[35390], 00:46:47.999 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:46:47.999 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[35914], 00:46:47.999 | 99.00th=[36963], 99.50th=[36963], 99.90th=[51643], 99.95th=[51643], 00:46:47.999 | 99.99th=[51643] 00:46:47.999 bw ( KiB/s): min= 1792, max= 2176, per=4.24%, avg=1818.95, stdev=91.30, samples=19 00:46:47.999 iops : min= 448, max= 544, avg=454.74, stdev=22.83, samples=19 00:46:47.999 lat (msec) : 10=1.06%, 20=0.70%, 50=97.89%, 100=0.35% 00:46:47.999 cpu : 
usr=98.17%, sys=1.43%, ctx=16, majf=0, minf=1639 00:46:47.999 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:47.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:47.999 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:47.999 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:47.999 filename2: (groupid=0, jobs=1): err= 0: pid=22415: Wed Nov 6 15:51:14 2024 00:46:47.999 read: IOPS=446, BW=1785KiB/s (1828kB/s)(17.5MiB/10014msec) 00:46:47.999 slat (nsec): min=4761, max=84912, avg=42664.51, stdev=10288.75 00:46:47.999 clat (msec): min=15, max=110, avg=35.49, stdev= 3.98 00:46:47.999 lat (msec): min=15, max=110, avg=35.53, stdev= 3.98 00:46:47.999 clat percentiles (msec): 00:46:47.999 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 35], 00:46:48.000 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:46:48.000 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 36], 95.00th=[ 36], 00:46:48.000 | 99.00th=[ 43], 99.50th=[ 54], 99.90th=[ 90], 99.95th=[ 90], 00:46:48.000 | 99.99th=[ 111] 00:46:48.000 bw ( KiB/s): min= 1523, max= 1920, per=4.15%, avg=1780.95, stdev=67.89, samples=20 00:46:48.000 iops : min= 380, max= 480, avg=445.20, stdev=17.12, samples=20 00:46:48.000 lat (msec) : 20=0.36%, 50=99.02%, 100=0.58%, 250=0.04% 00:46:48.000 cpu : usr=98.37%, sys=1.18%, ctx=15, majf=0, minf=1633 00:46:48.000 IO depths : 1=4.8%, 2=11.1%, 4=25.1%, 8=51.5%, 16=7.6%, 32=0.0%, >=64=0.0% 00:46:48.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:48.000 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:48.000 issued rwts: total=4468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:48.000 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:48.000 00:46:48.000 Run status group 0 (all jobs): 00:46:48.000 READ: bw=41.9MiB/s (43.9MB/s), 
1783KiB/s-1817KiB/s (1825kB/s-1861kB/s), io=420MiB (441MB), run=10001-10033msec 00:46:48.568 ----------------------------------------------------- 00:46:48.568 Suppressions used: 00:46:48.568 count bytes template 00:46:48.568 45 402 /usr/src/fio/parse.c 00:46:48.568 1 8 libtcmalloc_minimal.so 00:46:48.568 1 904 libcrypto.so 00:46:48.568 ----------------------------------------------------- 00:46:48.568 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 
00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 bdev_null0 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 [2024-11-06 15:51:16.140290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 bdev_null1 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 
00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:46:48.568 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:48.569 { 00:46:48.569 "params": { 00:46:48.569 "name": "Nvme$subsystem", 00:46:48.569 "trtype": "$TEST_TRANSPORT", 00:46:48.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:48.569 "adrfam": "ipv4", 00:46:48.569 "trsvcid": "$NVMF_PORT", 00:46:48.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:48.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:48.569 "hdgst": ${hdgst:-false}, 00:46:48.569 "ddgst": ${ddgst:-false} 00:46:48.569 }, 00:46:48.569 "method": "bdev_nvme_attach_controller" 00:46:48.569 } 00:46:48.569 EOF 00:46:48.569 )") 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:48.569 
15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:48.569 { 00:46:48.569 "params": { 00:46:48.569 "name": "Nvme$subsystem", 00:46:48.569 "trtype": "$TEST_TRANSPORT", 00:46:48.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:48.569 "adrfam": "ipv4", 00:46:48.569 "trsvcid": "$NVMF_PORT", 00:46:48.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:48.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:48.569 "hdgst": ${hdgst:-false}, 00:46:48.569 "ddgst": ${ddgst:-false} 00:46:48.569 }, 00:46:48.569 "method": "bdev_nvme_attach_controller" 00:46:48.569 } 00:46:48.569 EOF 00:46:48.569 )") 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:46:48.569 15:51:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:48.569 "params": { 00:46:48.569 "name": "Nvme0", 00:46:48.569 "trtype": "tcp", 00:46:48.569 "traddr": "10.0.0.2", 00:46:48.569 "adrfam": "ipv4", 00:46:48.569 "trsvcid": "4420", 00:46:48.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:48.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:48.569 "hdgst": false, 00:46:48.569 "ddgst": false 00:46:48.569 }, 00:46:48.569 "method": "bdev_nvme_attach_controller" 00:46:48.569 },{ 00:46:48.569 "params": { 00:46:48.569 "name": "Nvme1", 00:46:48.569 "trtype": "tcp", 00:46:48.569 "traddr": "10.0.0.2", 00:46:48.569 "adrfam": "ipv4", 00:46:48.569 "trsvcid": "4420", 00:46:48.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:48.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:48.569 "hdgst": false, 00:46:48.569 "ddgst": false 00:46:48.569 }, 00:46:48.569 "method": "bdev_nvme_attach_controller" 00:46:48.569 }' 00:46:48.828 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:48.828 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:48.828 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:46:48.828 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:48.828 15:51:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:49.087 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:46:49.087 ... 
00:46:49.087 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:46:49.087 ... 00:46:49.087 fio-3.35 00:46:49.087 Starting 4 threads 00:46:55.654 00:46:55.654 filename0: (groupid=0, jobs=1): err= 0: pid=24904: Wed Nov 6 15:51:22 2024 00:46:55.654 read: IOPS=2324, BW=18.2MiB/s (19.0MB/s)(90.8MiB/5002msec) 00:46:55.654 slat (nsec): min=6055, max=67940, avg=11100.89, stdev=3736.95 00:46:55.654 clat (usec): min=728, max=6506, avg=3406.68, stdev=569.44 00:46:55.654 lat (usec): min=742, max=6522, avg=3417.78, stdev=569.33 00:46:55.654 clat percentiles (usec): 00:46:55.654 | 1.00th=[ 2089], 5.00th=[ 2606], 10.00th=[ 2802], 20.00th=[ 2999], 00:46:55.654 | 30.00th=[ 3163], 40.00th=[ 3326], 50.00th=[ 3425], 60.00th=[ 3458], 00:46:55.654 | 70.00th=[ 3490], 80.00th=[ 3687], 90.00th=[ 4080], 95.00th=[ 4490], 00:46:55.654 | 99.00th=[ 5407], 99.50th=[ 5735], 99.90th=[ 6194], 99.95th=[ 6259], 00:46:55.654 | 99.99th=[ 6521] 00:46:55.654 bw ( KiB/s): min=17856, max=19264, per=25.39%, avg=18577.78, stdev=522.79, samples=9 00:46:55.654 iops : min= 2232, max= 2408, avg=2322.22, stdev=65.35, samples=9 00:46:55.654 lat (usec) : 750=0.02%, 1000=0.01% 00:46:55.654 lat (msec) : 2=0.74%, 4=88.35%, 10=10.89% 00:46:55.654 cpu : usr=96.30%, sys=3.30%, ctx=9, majf=0, minf=1632 00:46:55.654 IO depths : 1=0.3%, 2=6.7%, 4=64.1%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:55.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:55.654 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:55.654 issued rwts: total=11628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:55.654 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:55.654 filename0: (groupid=0, jobs=1): err= 0: pid=24905: Wed Nov 6 15:51:22 2024 00:46:55.654 read: IOPS=2262, BW=17.7MiB/s (18.5MB/s)(88.4MiB/5001msec) 00:46:55.654 slat (nsec): min=5755, max=66061, avg=11065.43, stdev=3767.83 00:46:55.654 clat 
(usec): min=696, max=8202, avg=3501.79, stdev=592.94 00:46:55.654 lat (usec): min=710, max=8224, avg=3512.85, stdev=592.74 00:46:55.654 clat percentiles (usec): 00:46:55.654 | 1.00th=[ 2311], 5.00th=[ 2737], 10.00th=[ 2900], 20.00th=[ 3130], 00:46:55.654 | 30.00th=[ 3294], 40.00th=[ 3392], 50.00th=[ 3458], 60.00th=[ 3490], 00:46:55.654 | 70.00th=[ 3556], 80.00th=[ 3785], 90.00th=[ 4228], 95.00th=[ 4686], 00:46:55.654 | 99.00th=[ 5604], 99.50th=[ 5866], 99.90th=[ 6194], 99.95th=[ 6390], 00:46:55.654 | 99.99th=[ 8160] 00:46:55.654 bw ( KiB/s): min=17424, max=18944, per=24.75%, avg=18104.89, stdev=537.62, samples=9 00:46:55.654 iops : min= 2178, max= 2368, avg=2263.11, stdev=67.20, samples=9 00:46:55.654 lat (usec) : 750=0.02%, 1000=0.04% 00:46:55.654 lat (msec) : 2=0.48%, 4=86.11%, 10=13.35% 00:46:55.654 cpu : usr=95.36%, sys=4.24%, ctx=8, majf=0, minf=1632 00:46:55.654 IO depths : 1=0.1%, 2=6.8%, 4=64.3%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:55.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:55.654 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:55.654 issued rwts: total=11313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:55.654 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:55.654 filename1: (groupid=0, jobs=1): err= 0: pid=24906: Wed Nov 6 15:51:22 2024 00:46:55.654 read: IOPS=2379, BW=18.6MiB/s (19.5MB/s)(93.0MiB/5002msec) 00:46:55.654 slat (usec): min=5, max=209, avg=10.66, stdev= 4.04 00:46:55.654 clat (usec): min=699, max=6343, avg=3328.98, stdev=488.91 00:46:55.654 lat (usec): min=713, max=6357, avg=3339.64, stdev=488.87 00:46:55.654 clat percentiles (usec): 00:46:55.654 | 1.00th=[ 2008], 5.00th=[ 2606], 10.00th=[ 2802], 20.00th=[ 2933], 00:46:55.654 | 30.00th=[ 3097], 40.00th=[ 3228], 50.00th=[ 3392], 60.00th=[ 3458], 00:46:55.654 | 70.00th=[ 3490], 80.00th=[ 3589], 90.00th=[ 3818], 95.00th=[ 4146], 00:46:55.654 | 99.00th=[ 4817], 99.50th=[ 5211], 99.90th=[ 5932], 99.95th=[ 
6128], 00:46:55.654 | 99.99th=[ 6325] 00:46:55.654 bw ( KiB/s): min=18192, max=20448, per=26.03%, avg=19041.78, stdev=721.22, samples=9 00:46:55.654 iops : min= 2274, max= 2556, avg=2380.22, stdev=90.15, samples=9 00:46:55.654 lat (usec) : 750=0.01%, 1000=0.01% 00:46:55.654 lat (msec) : 2=0.97%, 4=92.23%, 10=6.79% 00:46:55.654 cpu : usr=96.12%, sys=3.48%, ctx=8, majf=0, minf=1634 00:46:55.654 IO depths : 1=0.3%, 2=7.1%, 4=64.2%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:55.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:55.654 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:55.654 issued rwts: total=11901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:55.654 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:55.654 filename1: (groupid=0, jobs=1): err= 0: pid=24907: Wed Nov 6 15:51:22 2024 00:46:55.654 read: IOPS=2180, BW=17.0MiB/s (17.9MB/s)(85.2MiB/5003msec) 00:46:55.654 slat (nsec): min=6293, max=38073, avg=10787.66, stdev=3661.66 00:46:55.654 clat (usec): min=754, max=6759, avg=3636.30, stdev=546.09 00:46:55.654 lat (usec): min=767, max=6774, avg=3647.08, stdev=545.75 00:46:55.654 clat percentiles (usec): 00:46:55.654 | 1.00th=[ 2507], 5.00th=[ 2966], 10.00th=[ 3130], 20.00th=[ 3359], 00:46:55.654 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3556], 00:46:55.654 | 70.00th=[ 3752], 80.00th=[ 3916], 90.00th=[ 4293], 95.00th=[ 4752], 00:46:55.654 | 99.00th=[ 5604], 99.50th=[ 5866], 99.90th=[ 6325], 99.95th=[ 6390], 00:46:55.654 | 99.99th=[ 6718] 00:46:55.654 bw ( KiB/s): min=16688, max=17872, per=23.78%, avg=17399.11, stdev=391.68, samples=9 00:46:55.654 iops : min= 2086, max= 2234, avg=2174.89, stdev=48.96, samples=9 00:46:55.654 lat (usec) : 1000=0.02% 00:46:55.654 lat (msec) : 2=0.28%, 4=82.35%, 10=17.36% 00:46:55.654 cpu : usr=95.98%, sys=3.60%, ctx=9, majf=0, minf=1634 00:46:55.654 IO depths : 1=0.1%, 2=3.4%, 4=68.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:55.654 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:55.654 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:55.654 issued rwts: total=10907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:55.654 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:55.654 00:46:55.654 Run status group 0 (all jobs): 00:46:55.654 READ: bw=71.4MiB/s (74.9MB/s), 17.0MiB/s-18.6MiB/s (17.9MB/s-19.5MB/s), io=357MiB (375MB), run=5001-5003msec 00:46:56.225 ----------------------------------------------------- 00:46:56.225 Suppressions used: 00:46:56.225 count bytes template 00:46:56.225 6 52 /usr/src/fio/parse.c 00:46:56.225 1 8 libtcmalloc_minimal.so 00:46:56.225 1 904 libcrypto.so 00:46:56.225 ----------------------------------------------------- 00:46:56.225 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:56.225 00:46:56.225 real 0m28.610s 00:46:56.225 user 4m56.164s 00:46:56.225 sys 0m6.408s 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:46:56.225 15:51:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:56.225 ************************************ 00:46:56.225 END TEST fio_dif_rand_params 00:46:56.225 ************************************ 00:46:56.485 15:51:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:46:56.485 15:51:23 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:46:56.485 15:51:23 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:56.485 15:51:23 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:56.485 ************************************ 00:46:56.485 START TEST fio_dif_digest 00:46:56.485 ************************************ 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:56.485 bdev_null0 00:46:56.485 15:51:23 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:56.485 [2024-11-06 15:51:23.941618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 
00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:56.485 { 00:46:56.485 "params": { 00:46:56.485 "name": "Nvme$subsystem", 00:46:56.485 "trtype": "$TEST_TRANSPORT", 00:46:56.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:56.485 "adrfam": "ipv4", 00:46:56.485 "trsvcid": "$NVMF_PORT", 00:46:56.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:56.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:56.485 "hdgst": ${hdgst:-false}, 00:46:56.485 "ddgst": ${ddgst:-false} 00:46:56.485 }, 00:46:56.485 "method": "bdev_nvme_attach_controller" 00:46:56.485 } 00:46:56.485 EOF 00:46:56.485 )") 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:46:56.485 15:51:23 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:56.485 "params": { 00:46:56.485 "name": "Nvme0", 00:46:56.485 "trtype": "tcp", 00:46:56.485 "traddr": "10.0.0.2", 00:46:56.485 "adrfam": "ipv4", 00:46:56.485 "trsvcid": "4420", 00:46:56.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:56.485 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:56.485 "hdgst": true, 00:46:56.485 "ddgst": true 00:46:56.485 }, 00:46:56.485 "method": "bdev_nvme_attach_controller" 00:46:56.485 }' 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # break 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:56.485 15:51:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:56.745 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:46:56.745 ... 00:46:56.745 fio-3.35 00:46:56.745 Starting 3 threads 00:47:08.953 00:47:08.953 filename0: (groupid=0, jobs=1): err= 0: pid=26181: Wed Nov 6 15:51:35 2024 00:47:08.953 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(311MiB/10049msec) 00:47:08.953 slat (nsec): min=7498, max=30918, avg=13474.64, stdev=1536.68 00:47:08.953 clat (usec): min=9463, max=53779, avg=12097.54, stdev=1414.85 00:47:08.953 lat (usec): min=9476, max=53791, avg=12111.02, stdev=1414.92 00:47:08.953 clat percentiles (usec): 00:47:08.953 | 1.00th=[10159], 5.00th=[10683], 10.00th=[10945], 20.00th=[11338], 00:47:08.953 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:47:08.953 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:47:08.953 | 99.00th=[14222], 99.50th=[14746], 99.90th=[16909], 99.95th=[48497], 00:47:08.953 | 99.99th=[53740] 00:47:08.953 bw ( KiB/s): min=29952, max=32768, per=34.99%, avg=31782.40, stdev=725.39, samples=20 00:47:08.953 iops : min= 234, max= 256, avg=248.30, stdev= 5.67, samples=20 00:47:08.953 lat (msec) : 10=0.60%, 20=99.32%, 50=0.04%, 100=0.04% 00:47:08.953 cpu : usr=94.76%, sys=4.90%, ctx=18, majf=0, minf=1636 00:47:08.953 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:08.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.953 issued rwts: total=2485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:08.953 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:08.953 filename0: (groupid=0, jobs=1): err= 0: 
pid=26182: Wed Nov 6 15:51:35 2024 00:47:08.953 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(285MiB/10044msec) 00:47:08.953 slat (nsec): min=7675, max=32269, avg=13433.83, stdev=1597.71 00:47:08.953 clat (usec): min=10272, max=50577, avg=13204.60, stdev=1376.80 00:47:08.953 lat (usec): min=10287, max=50589, avg=13218.03, stdev=1376.93 00:47:08.953 clat percentiles (usec): 00:47:08.953 | 1.00th=[11076], 5.00th=[11731], 10.00th=[12125], 20.00th=[12387], 00:47:08.953 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:47:08.953 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14353], 95.00th=[14615], 00:47:08.953 | 99.00th=[15664], 99.50th=[15926], 99.90th=[17957], 99.95th=[45876], 00:47:08.953 | 99.99th=[50594] 00:47:08.953 bw ( KiB/s): min=28160, max=30464, per=32.05%, avg=29110.20, stdev=598.36, samples=20 00:47:08.953 iops : min= 220, max= 238, avg=227.40, stdev= 4.64, samples=20 00:47:08.953 lat (msec) : 20=99.91%, 50=0.04%, 100=0.04% 00:47:08.953 cpu : usr=94.73%, sys=4.92%, ctx=16, majf=0, minf=1635 00:47:08.953 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:08.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.953 issued rwts: total=2276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:08.953 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:08.953 filename0: (groupid=0, jobs=1): err= 0: pid=26183: Wed Nov 6 15:51:35 2024 00:47:08.953 read: IOPS=235, BW=29.5MiB/s (30.9MB/s)(296MiB/10046msec) 00:47:08.953 slat (nsec): min=7451, max=88826, avg=13470.60, stdev=2165.83 00:47:08.953 clat (usec): min=9661, max=52989, avg=12686.92, stdev=1388.79 00:47:08.953 lat (usec): min=9674, max=53003, avg=12700.39, stdev=1388.88 00:47:08.953 clat percentiles (usec): 00:47:08.953 | 1.00th=[10552], 5.00th=[11207], 10.00th=[11600], 20.00th=[11994], 00:47:08.953 | 30.00th=[12256], 40.00th=[12518], 
50.00th=[12649], 60.00th=[12911], 00:47:08.953 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:47:08.953 | 99.00th=[14746], 99.50th=[14877], 99.90th=[15270], 99.95th=[47973], 00:47:08.953 | 99.99th=[53216] 00:47:08.953 bw ( KiB/s): min=29440, max=31232, per=33.36%, avg=30297.60, stdev=486.26, samples=20 00:47:08.953 iops : min= 230, max= 244, avg=236.70, stdev= 3.80, samples=20 00:47:08.953 lat (msec) : 10=0.13%, 20=99.79%, 50=0.04%, 100=0.04% 00:47:08.953 cpu : usr=94.65%, sys=4.99%, ctx=15, majf=0, minf=1635 00:47:08.953 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:08.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:08.953 issued rwts: total=2369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:08.953 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:08.953 00:47:08.953 Run status group 0 (all jobs): 00:47:08.953 READ: bw=88.7MiB/s (93.0MB/s), 28.3MiB/s-30.9MiB/s (29.7MB/s-32.4MB/s), io=891MiB (935MB), run=10044-10049msec 00:47:08.953 ----------------------------------------------------- 00:47:08.953 Suppressions used: 00:47:08.953 count bytes template 00:47:08.953 5 44 /usr/src/fio/parse.c 00:47:08.953 1 8 libtcmalloc_minimal.so 00:47:08.953 1 904 libcrypto.so 00:47:08.953 ----------------------------------------------------- 00:47:08.953 00:47:08.953 15:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:47:08.953 15:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:08.954 00:47:08.954 real 0m12.481s 00:47:08.954 user 0m37.345s 00:47:08.954 sys 0m1.977s 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:08.954 15:51:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:08.954 ************************************ 00:47:08.954 END TEST fio_dif_digest 00:47:08.954 ************************************ 00:47:08.954 15:51:36 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:47:08.954 15:51:36 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:47:08.954 15:51:36 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:08.954 15:51:36 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:47:08.954 15:51:36 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:08.954 15:51:36 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:47:08.954 15:51:36 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:08.954 15:51:36 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:08.954 rmmod nvme_tcp 00:47:08.954 rmmod nvme_fabrics 00:47:08.954 rmmod nvme_keyring 00:47:08.954 15:51:36 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:08.954 15:51:36 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:47:08.954 15:51:36 nvmf_dif -- 
nvmf/common.sh@129 -- # return 0 00:47:08.954 15:51:36 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 16217 ']' 00:47:08.954 15:51:36 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 16217 00:47:08.954 15:51:36 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 16217 ']' 00:47:08.954 15:51:36 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 16217 00:47:08.954 15:51:36 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:47:08.954 15:51:36 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:47:08.954 15:51:36 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 16217 00:47:08.954 15:51:36 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:47:08.954 15:51:36 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:47:08.954 15:51:36 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 16217' 00:47:08.954 killing process with pid 16217 00:47:08.954 15:51:36 nvmf_dif -- common/autotest_common.sh@971 -- # kill 16217 00:47:08.954 15:51:36 nvmf_dif -- common/autotest_common.sh@976 -- # wait 16217 00:47:10.331 15:51:37 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:47:10.331 15:51:37 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:12.866 Waiting for block devices as requested 00:47:12.866 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:47:12.866 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:47:13.125 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:47:13.125 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:47:13.125 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:47:13.383 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:47:13.383 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:47:13.383 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:47:13.383 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:47:13.642 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:47:13.642 0000:80:04.6 (8086 
2021): vfio-pci -> ioatdma 00:47:13.642 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:47:13.901 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:47:13.901 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:47:13.901 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:47:13.901 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:47:14.160 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:47:14.160 15:51:41 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:14.160 15:51:41 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:14.160 15:51:41 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:47:14.160 15:51:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:47:14.160 15:51:41 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:14.160 15:51:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:47:14.160 15:51:41 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:14.160 15:51:41 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:14.160 15:51:41 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:14.160 15:51:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:14.160 15:51:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:16.697 15:51:43 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:16.697 00:47:16.697 real 1m23.693s 00:47:16.697 user 7m27.861s 00:47:16.697 sys 0m22.337s 00:47:16.697 15:51:43 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:16.697 15:51:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:16.697 ************************************ 00:47:16.697 END TEST nvmf_dif 00:47:16.697 ************************************ 00:47:16.697 15:51:43 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:47:16.697 15:51:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:47:16.697 15:51:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:47:16.697 15:51:43 -- common/autotest_common.sh@10 -- # set +x 00:47:16.697 ************************************ 00:47:16.697 START TEST nvmf_abort_qd_sizes 00:47:16.697 ************************************ 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:47:16.697 * Looking for test storage... 00:47:16.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@344 
-- # case "$op" in 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:47:16.697 15:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:47:16.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:16.698 --rc genhtml_branch_coverage=1 00:47:16.698 --rc genhtml_function_coverage=1 00:47:16.698 --rc genhtml_legend=1 00:47:16.698 --rc geninfo_all_blocks=1 00:47:16.698 --rc geninfo_unexecuted_blocks=1 00:47:16.698 00:47:16.698 ' 00:47:16.698 15:51:43 
nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:47:16.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:16.698 --rc genhtml_branch_coverage=1 00:47:16.698 --rc genhtml_function_coverage=1 00:47:16.698 --rc genhtml_legend=1 00:47:16.698 --rc geninfo_all_blocks=1 00:47:16.698 --rc geninfo_unexecuted_blocks=1 00:47:16.698 00:47:16.698 ' 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:47:16.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:16.698 --rc genhtml_branch_coverage=1 00:47:16.698 --rc genhtml_function_coverage=1 00:47:16.698 --rc genhtml_legend=1 00:47:16.698 --rc geninfo_all_blocks=1 00:47:16.698 --rc geninfo_unexecuted_blocks=1 00:47:16.698 00:47:16.698 ' 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:47:16.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:16.698 --rc genhtml_branch_coverage=1 00:47:16.698 --rc genhtml_function_coverage=1 00:47:16.698 --rc genhtml_legend=1 00:47:16.698 --rc geninfo_all_blocks=1 00:47:16.698 --rc geninfo_unexecuted_blocks=1 00:47:16.698 00:47:16.698 ' 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:16.698 15:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:16.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:47:16.698 15:51:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:47:21.973 Found 0000:86:00.0 (0x8086 - 0x159b) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:47:21.973 Found 0000:86:00.1 (0x8086 - 0x159b) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:47:21.973 Found net devices under 0000:86:00.0: cvl_0_0 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:21.973 15:51:49 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:47:21.974 Found net devices under 0000:86:00.1: cvl_0_1 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:21.974 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:22.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:22.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:47:22.232 00:47:22.232 --- 10.0.0.2 ping statistics --- 00:47:22.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:22.232 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:22.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:22.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:47:22.232 00:47:22.232 --- 10.0.0.1 ping statistics --- 00:47:22.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:22.232 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:47:22.232 15:51:49 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:47:25.519 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:47:25.519 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:47:25.519 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:47:26.897 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=34422 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 34422 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 34422 ']' 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:47:26.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:47:26.897 15:51:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:26.897 [2024-11-06 15:51:54.428376] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:47:26.897 [2024-11-06 15:51:54.428468] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:27.155 [2024-11-06 15:51:54.559306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:27.155 [2024-11-06 15:51:54.669523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:27.155 [2024-11-06 15:51:54.669567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:27.155 [2024-11-06 15:51:54.669577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:27.155 [2024-11-06 15:51:54.669588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:27.156 [2024-11-06 15:51:54.669596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:47:27.156 [2024-11-06 15:51:54.671977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:27.156 [2024-11-06 15:51:54.672059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:27.156 [2024-11-06 15:51:54.672126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:27.156 [2024-11-06 15:51:54.672148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:47:27.721 15:51:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:27.721 ************************************ 00:47:27.721 START TEST spdk_target_abort 00:47:27.721 ************************************ 00:47:27.721 15:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:47:27.721 15:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:47:27.721 15:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:47:27.721 15:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:27.721 15:51:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:31.059 spdk_targetn1 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:31.059 [2024-11-06 15:51:58.199097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:31.059 [2024-11-06 15:51:58.242913] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:31.059 15:51:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:34.335 Initializing NVMe Controllers 00:47:34.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:34.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:34.335 Initialization complete. Launching workers. 
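The rabort trace above assembles the transport ID string one field at a time before launching each abort run. A minimal standalone sketch of that loop (hypothetical helper name; uses `eval` for the indirect expansion the script performs with `${!r}`):

```shell
#!/bin/sh
# Sketch of the target-string build loop from target/abort_qd_sizes.sh@28-29.
# build_target_str is a hypothetical name; the script inlines this in rabort().
build_target_str() {
    trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    target='' 
    for r in trtype adrfam traddr trsvcid subnqn; do
        eval "val=\$$r"                       # indirect lookup: value of the variable named $r
        target="${target:+$target }$r:$val"   # append "name:value", space-separated
    done
    printf '%s\n' "$target"
}

build_target_str tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
# → trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn
```

The resulting string is passed verbatim to the abort example's `-r` option, as seen in the invocations that follow.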
00:47:34.335 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14803, failed: 0 00:47:34.335 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1209, failed to submit 13594 00:47:34.335 success 757, unsuccessful 452, failed 0 00:47:34.335 15:52:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:34.335 15:52:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:37.616 Initializing NVMe Controllers 00:47:37.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:37.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:37.616 Initialization complete. Launching workers. 00:47:37.616 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8593, failed: 0 00:47:37.616 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1281, failed to submit 7312 00:47:37.616 success 275, unsuccessful 1006, failed 0 00:47:37.616 15:52:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:37.616 15:52:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:40.904 Initializing NVMe Controllers 00:47:40.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:40.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:40.904 Initialization complete. Launching workers. 
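The per-run summary lines above obey a simple invariant: aborts submitted plus aborts that failed to submit equals total I/O completed, and successful plus unsuccessful aborts equals aborts submitted. A small sketch (hypothetical helper, not part of the test suite) that checks those totals against the numbers in the trace:

```shell
#!/bin/sh
# check_abort_stats is a hypothetical consistency check on the abort example's
# summary lines: completed = submitted + failed_to_submit,
#                submitted = success + unsuccessful.
check_abort_stats() {
    completed=$1 submitted=$2 not_submitted=$3 success=$4 unsuccessful=$5
    [ $((submitted + not_submitted)) -eq "$completed" ] &&
    [ $((success + unsuccessful)) -eq "$submitted" ]
}

# qd=4 run above: 14803 I/O completed, 1209 aborts submitted, 13594 not submitted
check_abort_stats 14803 1209 13594 757 452 && echo "qd=4 stats consistent"
```

The same check holds for the qd=24 run (1281 + 7312 = 8593, 275 + 1006 = 1281) and the qd=64 run that follows.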
00:47:40.904 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33973, failed: 0 00:47:40.904 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2828, failed to submit 31145 00:47:40.904 success 569, unsuccessful 2259, failed 0 00:47:40.904 15:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:47:40.904 15:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:40.904 15:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:40.904 15:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:40.904 15:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:47:40.904 15:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:40.904 15:52:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:42.846 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.846 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 34422 00:47:42.846 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 34422 ']' 00:47:42.846 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 34422 00:47:42.846 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:47:42.846 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:47:42.846 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 34422 00:47:42.846 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:47:42.846 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:47:42.846 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 34422' 00:47:42.846 killing process with pid 34422 00:47:42.846 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 34422 00:47:42.846 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 34422 00:47:43.414 00:47:43.414 real 0m15.681s 00:47:43.414 user 1m1.407s 00:47:43.414 sys 0m2.602s 00:47:43.414 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:43.414 15:52:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:43.414 ************************************ 00:47:43.414 END TEST spdk_target_abort 00:47:43.414 ************************************ 00:47:43.414 15:52:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:47:43.414 15:52:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:47:43.414 15:52:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:47:43.414 15:52:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:43.673 ************************************ 00:47:43.673 START TEST kernel_target_abort 00:47:43.673 ************************************ 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:47:43.673 15:52:11 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:47:43.673 15:52:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:46.215 Waiting for block devices as requested 00:47:46.215 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:47:46.473 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:47:46.473 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:47:46.473 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:47:46.732 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:47:46.732 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:47:46.732 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:47:46.991 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:47:46.991 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:47:46.991 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:47:46.991 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:47:47.249 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:47:47.249 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:47:47.249 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:47:47.508 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:47:47.508 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:47:47.508 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:47:48.077 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:47:48.077 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:47:48.077 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:47:48.077 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local 
device=nvme0n1 00:47:48.077 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:47:48.077 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:47:48.077 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:47:48.077 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:47:48.077 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:47:48.336 No valid GPT data, bailing 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:47:48.336 00:47:48.336 Discovery Log Number of Records 2, Generation counter 2 00:47:48.336 =====Discovery Log Entry 0====== 00:47:48.336 trtype: tcp 00:47:48.336 adrfam: ipv4 00:47:48.336 subtype: current discovery subsystem 00:47:48.336 treq: not specified, sq flow control disable supported 00:47:48.336 portid: 1 00:47:48.336 trsvcid: 4420 00:47:48.336 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:47:48.336 traddr: 10.0.0.1 00:47:48.336 eflags: none 00:47:48.336 sectype: none 00:47:48.336 =====Discovery Log Entry 1====== 00:47:48.336 trtype: tcp 00:47:48.336 adrfam: ipv4 00:47:48.336 subtype: nvme subsystem 00:47:48.336 treq: not specified, sq flow control disable supported 00:47:48.336 portid: 1 00:47:48.336 trsvcid: 4420 00:47:48.336 subnqn: nqn.2016-06.io.spdk:testnqn 00:47:48.336 traddr: 10.0.0.1 00:47:48.336 eflags: none 00:47:48.336 sectype: none 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
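The kernel target exercised here is wired up entirely through nvmet configfs entries, as the mkdir/echo trace above shows. The redirection targets are elided by the xtrace, so the attribute paths below follow the standard kernel nvmet configfs names rather than being a verbatim reconstruction of configure_kernel_target; this is a config fragment that assumes root, the nvmet and nvmet_tcp modules loaded, and a free /dev/nvme0n1:

```shell
#!/bin/sh
# Condensed sketch of a kernel NVMe/TCP target setup via nvmet configfs.
# Attribute names are the standard nvmet ABI (an assumption: the trace does
# not show the redirection targets). Requires root.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$subsys" "$subsys/namespaces/1" "$port"

echo 1             > "$subsys/attr_allow_any_host"       # open access for the test
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"  # back namespace 1 with the local NVMe disk
echo 1             > "$subsys/namespaces/1/enable"

echo 10.0.0.1      > "$port/addr_traddr"                 # listen address, as in the trace
echo tcp           > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port
```

Once the symlink lands, the target is live, which is what the `nvme discover` output above confirms: one discovery subsystem and one nvme subsystem, both on 10.0.0.1:4420.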
nqn.2016-06.io.spdk:testnqn 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:48.336 15:52:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:51.623 Initializing NVMe Controllers 00:47:51.623 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:47:51.623 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:51.623 Initialization complete. Launching workers. 
00:47:51.623 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82531, failed: 0 00:47:51.623 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 82531, failed to submit 0 00:47:51.623 success 0, unsuccessful 82531, failed 0 00:47:51.623 15:52:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:51.623 15:52:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:54.913 Initializing NVMe Controllers 00:47:54.913 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:47:54.913 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:54.913 Initialization complete. Launching workers. 00:47:54.913 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 128770, failed: 0 00:47:54.913 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32502, failed to submit 96268 00:47:54.913 success 0, unsuccessful 32502, failed 0 00:47:54.913 15:52:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:54.913 15:52:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:58.202 Initializing NVMe Controllers 00:47:58.202 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:47:58.202 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:58.202 Initialization complete. Launching workers. 
00:47:58.202 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 123401, failed: 0 00:47:58.202 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30858, failed to submit 92543 00:47:58.202 success 0, unsuccessful 30858, failed 0 00:47:58.202 15:52:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:47:58.202 15:52:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:47:58.202 15:52:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:47:58.202 15:52:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:58.202 15:52:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:58.202 15:52:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:47:58.202 15:52:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:58.202 15:52:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:47:58.202 15:52:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:47:58.202 15:52:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:48:00.738 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:48:00.738 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:48:02.118 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:48:02.118 00:48:02.118 real 0m18.675s 00:48:02.118 user 0m9.436s 00:48:02.118 sys 0m5.572s 00:48:02.118 15:52:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:48:02.118 15:52:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:48:02.118 ************************************ 00:48:02.118 END TEST kernel_target_abort 00:48:02.118 ************************************ 00:48:02.378 15:52:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:48:02.378 15:52:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:48:02.378 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:02.378 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:48:02.378 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:02.378 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:48:02.378 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:02.378 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:02.378 rmmod nvme_tcp 00:48:02.378 rmmod nvme_fabrics 00:48:02.378 rmmod nvme_keyring 00:48:02.378 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:48:02.378 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:48:02.379 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:48:02.379 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 34422 ']' 00:48:02.379 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 34422 00:48:02.379 15:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 34422 ']' 00:48:02.379 15:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 34422 00:48:02.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (34422) - No such process 00:48:02.379 15:52:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 34422 is not found' 00:48:02.379 Process with pid 34422 is not found 00:48:02.379 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:48:02.379 15:52:29 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:48:04.915 Waiting for block devices as requested 00:48:05.179 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:48:05.179 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:48:05.179 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:48:05.443 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:48:05.443 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:48:05.443 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:48:05.701 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:48:05.701 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:48:05.701 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:48:05.701 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:48:05.960 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:48:05.960 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:48:05.960 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:48:06.219 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:48:06.219 0000:80:04.2 (8086 
2021): vfio-pci -> ioatdma 00:48:06.219 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:48:06.219 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:48:06.478 15:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:06.478 15:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:06.478 15:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:48:06.478 15:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:48:06.478 15:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:06.478 15:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:48:06.478 15:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:06.478 15:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:48:06.478 15:52:33 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:06.478 15:52:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:48:06.478 15:52:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:08.381 15:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:48:08.381 00:48:08.381 real 0m52.180s 00:48:08.381 user 1m15.322s 00:48:08.381 sys 0m16.894s 00:48:08.381 15:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:48:08.381 15:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:48:08.381 ************************************ 00:48:08.381 END TEST nvmf_abort_qd_sizes 00:48:08.381 ************************************ 00:48:08.640 15:52:36 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:48:08.640 15:52:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:48:08.640 15:52:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:48:08.640 15:52:36 -- common/autotest_common.sh@10 -- # set +x 00:48:08.640 ************************************ 00:48:08.640 START TEST keyring_file 00:48:08.640 ************************************ 00:48:08.640 15:52:36 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:48:08.640 * Looking for test storage... 00:48:08.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:48:08.640 15:52:36 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:48:08.640 15:52:36 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:48:08.640 15:52:36 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:48:08.640 15:52:36 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@345 -- # : 1 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:08.640 15:52:36 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@353 -- # local d=1 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@355 -- # echo 1 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@353 -- # local d=2 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@355 -- # echo 2 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:08.640 15:52:36 keyring_file -- scripts/common.sh@368 -- # return 0 00:48:08.640 15:52:36 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:08.640 15:52:36 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:48:08.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:08.640 --rc genhtml_branch_coverage=1 00:48:08.640 --rc genhtml_function_coverage=1 00:48:08.640 --rc genhtml_legend=1 00:48:08.640 --rc geninfo_all_blocks=1 00:48:08.640 --rc geninfo_unexecuted_blocks=1 00:48:08.640 00:48:08.640 ' 00:48:08.640 15:52:36 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:48:08.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:08.640 --rc genhtml_branch_coverage=1 00:48:08.640 --rc genhtml_function_coverage=1 00:48:08.640 --rc genhtml_legend=1 00:48:08.640 --rc geninfo_all_blocks=1 00:48:08.640 --rc 
geninfo_unexecuted_blocks=1 00:48:08.640 00:48:08.640 ' 00:48:08.640 15:52:36 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:48:08.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:08.640 --rc genhtml_branch_coverage=1 00:48:08.640 --rc genhtml_function_coverage=1 00:48:08.640 --rc genhtml_legend=1 00:48:08.640 --rc geninfo_all_blocks=1 00:48:08.640 --rc geninfo_unexecuted_blocks=1 00:48:08.640 00:48:08.640 ' 00:48:08.640 15:52:36 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:48:08.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:08.640 --rc genhtml_branch_coverage=1 00:48:08.640 --rc genhtml_function_coverage=1 00:48:08.640 --rc genhtml_legend=1 00:48:08.640 --rc geninfo_all_blocks=1 00:48:08.640 --rc geninfo_unexecuted_blocks=1 00:48:08.640 00:48:08.640 ' 00:48:08.640 15:52:36 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:48:08.640 15:52:36 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:08.640 15:52:36 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:48:08.640 15:52:36 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:08.640 15:52:36 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:08.640 15:52:36 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:08.640 15:52:36 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:08.640 15:52:36 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:08.641 15:52:36 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:08.641 15:52:36 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:48:08.641 15:52:36 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:08.641 15:52:36 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:08.641 15:52:36 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:08.641 15:52:36 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:08.641 15:52:36 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:08.641 15:52:36 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:08.641 15:52:36 keyring_file -- paths/export.sh@5 -- # export PATH 00:48:08.641 15:52:36 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@51 -- # : 0 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:48:08.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:08.641 15:52:36 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:08.641 15:52:36 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:48:08.641 15:52:36 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:48:08.641 15:52:36 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:48:08.641 15:52:36 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:48:08.641 15:52:36 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:48:08.641 15:52:36 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:48:08.641 15:52:36 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:48:08.641 15:52:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:48:08.641 15:52:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:48:08.641 15:52:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:48:08.641 15:52:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:48:08.641 15:52:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ojOaDoxG6J 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:48:08.900 15:52:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:48:08.900 15:52:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:48:08.900 15:52:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:08.900 15:52:36 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:48:08.900 15:52:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:48:08.900 15:52:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ojOaDoxG6J 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ojOaDoxG6J 00:48:08.900 15:52:36 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ojOaDoxG6J 00:48:08.900 15:52:36 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@17 -- # name=key1 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.H7jjuBXZrK 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:48:08.900 15:52:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:48:08.900 15:52:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:48:08.900 15:52:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:08.900 15:52:36 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:48:08.900 15:52:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:48:08.900 15:52:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.H7jjuBXZrK 00:48:08.900 15:52:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.H7jjuBXZrK 00:48:08.900 15:52:36 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.H7jjuBXZrK 
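The prep_key steps traced above (mktemp, format_interchange_psk via an inline `python -`, chmod 0600) can be sketched as follows. This is a minimal reproduction, assuming the CRC/base64 layout of the NVMe TLS PSK interchange format (the script computes it in an inline python snippet whose body is not shown in the log); digest 0 selects the "no HMAC" ("00") field.

```shell
# Sketch of format_interchange_psk from nvmf/common.sh: append the key's
# CRC32 (little-endian) to the raw key bytes, base64 the result, and wrap
# it in the NVMeTLSkey-1 framing. The exact layout is an assumption here,
# taken from the NVMe TLS PSK interchange format.
key=00112233445566778899aabbccddeeff
psk=$(python3 - "$key" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:")
EOF
)
echo "$psk"
```

The formatted PSK is then written to the mktemp path and chmod'd to 0600 before being handed to keyring_file_add_key, as the trace shows.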
00:48:08.900 15:52:36 keyring_file -- keyring/file.sh@30 -- # tgtpid=43647 00:48:08.900 15:52:36 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:48:08.900 15:52:36 keyring_file -- keyring/file.sh@32 -- # waitforlisten 43647 00:48:08.900 15:52:36 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 43647 ']' 00:48:08.900 15:52:36 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:08.900 15:52:36 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:08.900 15:52:36 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:08.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:08.900 15:52:36 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:08.900 15:52:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:08.900 [2024-11-06 15:52:36.459313] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:48:08.900 [2024-11-06 15:52:36.459420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43647 ] 00:48:09.160 [2024-11-06 15:52:36.565763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:09.160 [2024-11-06 15:52:36.668890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:48:10.096 15:52:37 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:10.096 [2024-11-06 15:52:37.489044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:10.096 null0 00:48:10.096 [2024-11-06 15:52:37.521077] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:10.096 [2024-11-06 15:52:37.521519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:10.096 15:52:37 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:10.096 [2024-11-06 15:52:37.549124] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:48:10.096 request: 00:48:10.096 { 00:48:10.096 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:48:10.096 "secure_channel": false, 00:48:10.096 "listen_address": { 00:48:10.096 "trtype": "tcp", 00:48:10.096 "traddr": "127.0.0.1", 00:48:10.096 "trsvcid": "4420" 00:48:10.096 }, 00:48:10.096 "method": "nvmf_subsystem_add_listener", 00:48:10.096 "req_id": 1 00:48:10.096 } 00:48:10.096 Got JSON-RPC error response 00:48:10.096 response: 00:48:10.096 { 00:48:10.096 "code": -32602, 00:48:10.096 "message": "Invalid parameters" 00:48:10.096 } 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:10.096 15:52:37 keyring_file -- keyring/file.sh@47 -- # bperfpid=43831 00:48:10.096 15:52:37 keyring_file -- keyring/file.sh@49 -- # waitforlisten 43831 /var/tmp/bperf.sock 00:48:10.096 15:52:37 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:48:10.096 15:52:37 
keyring_file -- common/autotest_common.sh@833 -- # '[' -z 43831 ']' 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:10.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:10.096 15:52:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:10.096 [2024-11-06 15:52:37.627797] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:48:10.096 [2024-11-06 15:52:37.627884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43831 ] 00:48:10.356 [2024-11-06 15:52:37.752438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:10.356 [2024-11-06 15:52:37.855787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:10.924 15:52:38 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:10.924 15:52:38 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:48:10.924 15:52:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ojOaDoxG6J 00:48:10.924 15:52:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ojOaDoxG6J 00:48:11.183 15:52:38 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.H7jjuBXZrK 00:48:11.183 15:52:38 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.H7jjuBXZrK 00:48:11.183 15:52:38 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:48:11.183 15:52:38 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:48:11.183 15:52:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:11.183 15:52:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:11.183 15:52:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:11.443 15:52:38 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ojOaDoxG6J == \/\t\m\p\/\t\m\p\.\o\j\O\a\D\o\x\G\6\J ]] 00:48:11.443 15:52:38 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:48:11.443 15:52:38 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:48:11.443 15:52:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:11.443 15:52:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:11.443 15:52:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:11.702 15:52:39 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.H7jjuBXZrK == \/\t\m\p\/\t\m\p\.\H\7\j\j\u\B\X\Z\r\K ]] 00:48:11.702 15:52:39 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:48:11.702 15:52:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:11.702 15:52:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:11.702 15:52:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:11.702 15:52:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:11.702 15:52:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
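The get_key/get_refcnt helpers exercised above pipe `keyring_get_keys` output through jq. A sketch against canned output (hypothetical paths and values, since no target is running here; the real helpers call `rpc.py -s /var/tmp/bperf.sock keyring_get_keys`):

```shell
# Filter a keyring_get_keys-style JSON array the way keyring/common.sh does:
# select the entry by name, then pull one field with jq -r.
keys='[{"name":"key0","path":"/tmp/key0","refcnt":1},
       {"name":"key1","path":"/tmp/key1","refcnt":1}]'
refcnt=$(echo "$keys" | jq -r '.[] | select(.name == "key0") | .refcnt')
echo "$refcnt"
```

The `(( 1 == 1 ))` checks in the trace are this refcnt value compared against the expected count.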
00:48:11.961 15:52:39 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:48:11.961 15:52:39 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:48:11.961 15:52:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:11.961 15:52:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:11.961 15:52:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:11.961 15:52:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:11.961 15:52:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:11.961 15:52:39 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:48:11.961 15:52:39 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:11.961 15:52:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:12.219 [2024-11-06 15:52:39.691533] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:12.219 nvme0n1 00:48:12.219 15:52:39 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:48:12.219 15:52:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:12.219 15:52:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:12.219 15:52:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:12.219 15:52:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:12.219 15:52:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:48:12.477 15:52:39 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:48:12.477 15:52:39 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:48:12.477 15:52:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:12.478 15:52:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:12.478 15:52:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:12.478 15:52:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:12.478 15:52:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:12.736 15:52:40 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:48:12.736 15:52:40 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:12.736 Running I/O for 1 seconds... 00:48:13.672 15398.00 IOPS, 60.15 MiB/s 00:48:13.672 Latency(us) 00:48:13.672 [2024-11-06T14:52:41.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:13.672 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:48:13.672 nvme0n1 : 1.00 15450.81 60.35 0.00 0.00 8267.47 3120.76 18350.08 00:48:13.672 [2024-11-06T14:52:41.310Z] =================================================================================================================== 00:48:13.672 [2024-11-06T14:52:41.310Z] Total : 15450.81 60.35 0.00 0.00 8267.47 3120.76 18350.08 00:48:13.672 { 00:48:13.672 "results": [ 00:48:13.672 { 00:48:13.672 "job": "nvme0n1", 00:48:13.672 "core_mask": "0x2", 00:48:13.672 "workload": "randrw", 00:48:13.672 "percentage": 50, 00:48:13.672 "status": "finished", 00:48:13.672 "queue_depth": 128, 00:48:13.672 "io_size": 4096, 00:48:13.672 "runtime": 1.004931, 00:48:13.672 "iops": 15450.81204580215, 00:48:13.672 "mibps": 60.354734553914646, 
00:48:13.672 "io_failed": 0, 00:48:13.672 "io_timeout": 0, 00:48:13.672 "avg_latency_us": 8267.474019756675, 00:48:13.672 "min_latency_us": 3120.7619047619046, 00:48:13.672 "max_latency_us": 18350.08 00:48:13.672 } 00:48:13.672 ], 00:48:13.672 "core_count": 1 00:48:13.672 } 00:48:13.931 15:52:41 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:13.931 15:52:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:48:13.931 15:52:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:48:13.931 15:52:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:13.931 15:52:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:13.931 15:52:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:13.931 15:52:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:13.931 15:52:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:14.190 15:52:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:48:14.190 15:52:41 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:48:14.190 15:52:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:14.190 15:52:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:14.190 15:52:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:14.190 15:52:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:14.190 15:52:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:14.449 15:52:41 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:48:14.449 15:52:41 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:14.449 15:52:41 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:48:14.449 15:52:41 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:14.449 15:52:41 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:48:14.449 15:52:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:14.449 15:52:41 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:48:14.449 15:52:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:14.449 15:52:41 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:14.449 15:52:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:48:14.449 [2024-11-06 15:52:42.067643] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:48:14.449 [2024-11-06 15:52:42.067970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000332f00 (107): Transport endpoint is not connected 00:48:14.449 [2024-11-06 15:52:42.068951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000332f00 (9): Bad file descriptor 00:48:14.449 [2024-11-06 15:52:42.069946] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:48:14.449 [2024-11-06 15:52:42.069973] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:48:14.449 [2024-11-06 15:52:42.069987] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:48:14.449 [2024-11-06 15:52:42.070000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:48:14.449 request: 00:48:14.449 { 00:48:14.449 "name": "nvme0", 00:48:14.449 "trtype": "tcp", 00:48:14.449 "traddr": "127.0.0.1", 00:48:14.450 "adrfam": "ipv4", 00:48:14.450 "trsvcid": "4420", 00:48:14.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:14.450 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:14.450 "prchk_reftag": false, 00:48:14.450 "prchk_guard": false, 00:48:14.450 "hdgst": false, 00:48:14.450 "ddgst": false, 00:48:14.450 "psk": "key1", 00:48:14.450 "allow_unrecognized_csi": false, 00:48:14.450 "method": "bdev_nvme_attach_controller", 00:48:14.450 "req_id": 1 00:48:14.450 } 00:48:14.450 Got JSON-RPC error response 00:48:14.450 response: 00:48:14.450 { 00:48:14.450 "code": -5, 00:48:14.450 "message": "Input/output error" 00:48:14.450 } 00:48:14.450 15:52:42 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:48:14.450 15:52:42 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:14.450 15:52:42 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:14.450 15:52:42 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:14.709 15:52:42 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:48:14.709 15:52:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:14.709 15:52:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:14.709 15:52:42 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:48:14.709 15:52:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:14.709 15:52:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:14.709 15:52:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:48:14.709 15:52:42 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:48:14.709 15:52:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:14.709 15:52:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:14.709 15:52:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:14.709 15:52:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:14.709 15:52:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:14.968 15:52:42 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:48:14.968 15:52:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:48:14.968 15:52:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:48:15.225 15:52:42 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:48:15.225 15:52:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:48:15.483 15:52:42 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:48:15.483 15:52:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:15.483 15:52:42 keyring_file -- keyring/file.sh@78 -- # jq length 00:48:15.483 15:52:43 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:48:15.483 15:52:43 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.ojOaDoxG6J 00:48:15.483 15:52:43 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ojOaDoxG6J 00:48:15.483 15:52:43 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:48:15.483 15:52:43 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ojOaDoxG6J 00:48:15.483 15:52:43 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:48:15.483 15:52:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:15.483 15:52:43 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:48:15.483 15:52:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:15.483 15:52:43 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ojOaDoxG6J 00:48:15.483 15:52:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ojOaDoxG6J 00:48:15.742 [2024-11-06 15:52:43.242495] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ojOaDoxG6J': 0100660 00:48:15.742 [2024-11-06 15:52:43.242529] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:48:15.742 request: 00:48:15.742 { 00:48:15.742 "name": "key0", 00:48:15.742 "path": "/tmp/tmp.ojOaDoxG6J", 00:48:15.742 "method": "keyring_file_add_key", 00:48:15.742 "req_id": 1 00:48:15.742 } 00:48:15.742 Got JSON-RPC error response 00:48:15.742 response: 00:48:15.742 { 00:48:15.742 "code": -1, 00:48:15.742 "message": "Operation not permitted" 00:48:15.742 } 00:48:15.742 15:52:43 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:48:15.742 15:52:43 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:15.742 15:52:43 
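The negative test above loosens the key file to 0660 and expects keyring_file_add_key to fail with "Invalid permissions for key file". A minimal reproduction of that permission check (the exact accepted mode set is an assumption; the log only shows 0600 passing and 0660 being rejected):

```shell
# Reproduce the keyring_file_check_path-style mode check: create a temp
# file, loosen it to group read/write, and flag anything not owner-only.
path=$(mktemp)
chmod 0660 "$path"
perms=$(stat -c '%a' "$path")   # GNU stat; prints e.g. 660
if [ "$perms" != "600" ]; then
    echo "Invalid permissions for key file '$path': 0$perms"
fi
rm -f "$path"
```

After this failure the test restores 0600 and re-adds the key, which the subsequent trace shows succeeding.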
keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:15.742 15:52:43 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:15.742 15:52:43 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.ojOaDoxG6J 00:48:15.742 15:52:43 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ojOaDoxG6J 00:48:15.742 15:52:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ojOaDoxG6J 00:48:16.001 15:52:43 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.ojOaDoxG6J 00:48:16.001 15:52:43 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:48:16.001 15:52:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:16.001 15:52:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:16.001 15:52:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:16.001 15:52:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:16.001 15:52:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:16.260 15:52:43 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:48:16.260 15:52:43 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:16.260 15:52:43 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:48:16.260 15:52:43 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:16.260 15:52:43 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:48:16.260 15:52:43 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:16.260 15:52:43 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:48:16.260 15:52:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:16.260 15:52:43 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:16.260 15:52:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:16.260 [2024-11-06 15:52:43.876233] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ojOaDoxG6J': No such file or directory 00:48:16.260 [2024-11-06 15:52:43.876267] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:48:16.260 [2024-11-06 15:52:43.876287] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:48:16.260 [2024-11-06 15:52:43.876299] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:48:16.260 [2024-11-06 15:52:43.876310] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:48:16.260 [2024-11-06 15:52:43.876320] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:48:16.260 request: 00:48:16.260 { 00:48:16.260 "name": "nvme0", 00:48:16.260 "trtype": "tcp", 00:48:16.260 "traddr": "127.0.0.1", 00:48:16.260 "adrfam": "ipv4", 00:48:16.260 "trsvcid": "4420", 00:48:16.260 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:16.260 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:48:16.260 "prchk_reftag": false, 00:48:16.260 "prchk_guard": false, 00:48:16.260 "hdgst": false, 00:48:16.260 "ddgst": false, 00:48:16.260 "psk": "key0", 00:48:16.260 "allow_unrecognized_csi": false, 00:48:16.260 "method": "bdev_nvme_attach_controller", 00:48:16.260 "req_id": 1 00:48:16.260 } 00:48:16.260 Got JSON-RPC error response 00:48:16.260 response: 00:48:16.260 { 00:48:16.260 "code": -19, 00:48:16.260 "message": "No such device" 00:48:16.260 } 00:48:16.260 15:52:43 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:48:16.260 15:52:43 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:16.260 15:52:43 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:16.260 15:52:43 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:16.260 15:52:43 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:48:16.260 15:52:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:48:16.519 15:52:44 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:48:16.519 15:52:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:48:16.519 15:52:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:48:16.519 15:52:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:48:16.519 15:52:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:48:16.519 15:52:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:48:16.519 15:52:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.h6S00TEJa7 00:48:16.519 15:52:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:48:16.519 15:52:44 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:48:16.519 15:52:44 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:48:16.519 15:52:44 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:16.519 15:52:44 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:48:16.519 15:52:44 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:48:16.519 15:52:44 keyring_file -- nvmf/common.sh@733 -- # python - 00:48:16.519 15:52:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.h6S00TEJa7 00:48:16.519 15:52:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.h6S00TEJa7 00:48:16.519 15:52:44 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.h6S00TEJa7 00:48:16.519 15:52:44 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h6S00TEJa7 00:48:16.519 15:52:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h6S00TEJa7 00:48:16.778 15:52:44 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:16.778 15:52:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:17.037 nvme0n1 00:48:17.037 15:52:44 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:48:17.037 15:52:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:17.037 15:52:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:17.037 15:52:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:17.037 15:52:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:17.037 
15:52:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:17.295 15:52:44 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:48:17.295 15:52:44 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:48:17.295 15:52:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:48:17.554 15:52:44 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:48:17.554 15:52:44 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:48:17.554 15:52:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:17.554 15:52:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:17.554 15:52:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:17.554 15:52:45 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:48:17.554 15:52:45 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:48:17.554 15:52:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:17.554 15:52:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:17.554 15:52:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:17.554 15:52:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:17.554 15:52:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:17.812 15:52:45 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:48:17.812 15:52:45 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:17.812 15:52:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:48:18.070 15:52:45 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:48:18.070 15:52:45 keyring_file -- keyring/file.sh@105 -- # jq length 00:48:18.070 15:52:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:18.329 15:52:45 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:48:18.329 15:52:45 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h6S00TEJa7 00:48:18.329 15:52:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h6S00TEJa7 00:48:18.329 15:52:45 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.H7jjuBXZrK 00:48:18.329 15:52:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.H7jjuBXZrK 00:48:18.588 15:52:46 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:18.588 15:52:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:18.847 nvme0n1 00:48:18.847 15:52:46 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:48:18.847 15:52:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:48:19.106 15:52:46 keyring_file -- keyring/file.sh@113 -- # config='{ 00:48:19.106 "subsystems": [ 00:48:19.106 { 00:48:19.106 "subsystem": "keyring", 00:48:19.106 
"config": [ 00:48:19.106 { 00:48:19.106 "method": "keyring_file_add_key", 00:48:19.106 "params": { 00:48:19.106 "name": "key0", 00:48:19.106 "path": "/tmp/tmp.h6S00TEJa7" 00:48:19.106 } 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "method": "keyring_file_add_key", 00:48:19.106 "params": { 00:48:19.106 "name": "key1", 00:48:19.106 "path": "/tmp/tmp.H7jjuBXZrK" 00:48:19.106 } 00:48:19.106 } 00:48:19.106 ] 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "subsystem": "iobuf", 00:48:19.106 "config": [ 00:48:19.106 { 00:48:19.106 "method": "iobuf_set_options", 00:48:19.106 "params": { 00:48:19.106 "small_pool_count": 8192, 00:48:19.106 "large_pool_count": 1024, 00:48:19.106 "small_bufsize": 8192, 00:48:19.106 "large_bufsize": 135168, 00:48:19.106 "enable_numa": false 00:48:19.106 } 00:48:19.106 } 00:48:19.106 ] 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "subsystem": "sock", 00:48:19.106 "config": [ 00:48:19.106 { 00:48:19.106 "method": "sock_set_default_impl", 00:48:19.106 "params": { 00:48:19.106 "impl_name": "posix" 00:48:19.106 } 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "method": "sock_impl_set_options", 00:48:19.106 "params": { 00:48:19.106 "impl_name": "ssl", 00:48:19.106 "recv_buf_size": 4096, 00:48:19.106 "send_buf_size": 4096, 00:48:19.106 "enable_recv_pipe": true, 00:48:19.106 "enable_quickack": false, 00:48:19.106 "enable_placement_id": 0, 00:48:19.106 "enable_zerocopy_send_server": true, 00:48:19.106 "enable_zerocopy_send_client": false, 00:48:19.106 "zerocopy_threshold": 0, 00:48:19.106 "tls_version": 0, 00:48:19.106 "enable_ktls": false 00:48:19.106 } 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "method": "sock_impl_set_options", 00:48:19.106 "params": { 00:48:19.106 "impl_name": "posix", 00:48:19.106 "recv_buf_size": 2097152, 00:48:19.106 "send_buf_size": 2097152, 00:48:19.106 "enable_recv_pipe": true, 00:48:19.106 "enable_quickack": false, 00:48:19.106 "enable_placement_id": 0, 00:48:19.106 "enable_zerocopy_send_server": true, 00:48:19.106 
"enable_zerocopy_send_client": false, 00:48:19.106 "zerocopy_threshold": 0, 00:48:19.106 "tls_version": 0, 00:48:19.106 "enable_ktls": false 00:48:19.106 } 00:48:19.106 } 00:48:19.106 ] 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "subsystem": "vmd", 00:48:19.106 "config": [] 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "subsystem": "accel", 00:48:19.106 "config": [ 00:48:19.106 { 00:48:19.106 "method": "accel_set_options", 00:48:19.106 "params": { 00:48:19.106 "small_cache_size": 128, 00:48:19.106 "large_cache_size": 16, 00:48:19.106 "task_count": 2048, 00:48:19.106 "sequence_count": 2048, 00:48:19.106 "buf_count": 2048 00:48:19.106 } 00:48:19.106 } 00:48:19.106 ] 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "subsystem": "bdev", 00:48:19.106 "config": [ 00:48:19.106 { 00:48:19.106 "method": "bdev_set_options", 00:48:19.106 "params": { 00:48:19.106 "bdev_io_pool_size": 65535, 00:48:19.106 "bdev_io_cache_size": 256, 00:48:19.106 "bdev_auto_examine": true, 00:48:19.106 "iobuf_small_cache_size": 128, 00:48:19.106 "iobuf_large_cache_size": 16 00:48:19.106 } 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "method": "bdev_raid_set_options", 00:48:19.106 "params": { 00:48:19.106 "process_window_size_kb": 1024, 00:48:19.106 "process_max_bandwidth_mb_sec": 0 00:48:19.106 } 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "method": "bdev_iscsi_set_options", 00:48:19.106 "params": { 00:48:19.106 "timeout_sec": 30 00:48:19.106 } 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "method": "bdev_nvme_set_options", 00:48:19.106 "params": { 00:48:19.106 "action_on_timeout": "none", 00:48:19.106 "timeout_us": 0, 00:48:19.106 "timeout_admin_us": 0, 00:48:19.106 "keep_alive_timeout_ms": 10000, 00:48:19.106 "arbitration_burst": 0, 00:48:19.106 "low_priority_weight": 0, 00:48:19.106 "medium_priority_weight": 0, 00:48:19.106 "high_priority_weight": 0, 00:48:19.106 "nvme_adminq_poll_period_us": 10000, 00:48:19.106 "nvme_ioq_poll_period_us": 0, 00:48:19.106 "io_queue_requests": 512, 00:48:19.106 
"delay_cmd_submit": true, 00:48:19.106 "transport_retry_count": 4, 00:48:19.106 "bdev_retry_count": 3, 00:48:19.106 "transport_ack_timeout": 0, 00:48:19.106 "ctrlr_loss_timeout_sec": 0, 00:48:19.106 "reconnect_delay_sec": 0, 00:48:19.106 "fast_io_fail_timeout_sec": 0, 00:48:19.106 "disable_auto_failback": false, 00:48:19.106 "generate_uuids": false, 00:48:19.106 "transport_tos": 0, 00:48:19.106 "nvme_error_stat": false, 00:48:19.106 "rdma_srq_size": 0, 00:48:19.106 "io_path_stat": false, 00:48:19.106 "allow_accel_sequence": false, 00:48:19.106 "rdma_max_cq_size": 0, 00:48:19.106 "rdma_cm_event_timeout_ms": 0, 00:48:19.106 "dhchap_digests": [ 00:48:19.106 "sha256", 00:48:19.106 "sha384", 00:48:19.106 "sha512" 00:48:19.106 ], 00:48:19.106 "dhchap_dhgroups": [ 00:48:19.106 "null", 00:48:19.106 "ffdhe2048", 00:48:19.106 "ffdhe3072", 00:48:19.106 "ffdhe4096", 00:48:19.106 "ffdhe6144", 00:48:19.106 "ffdhe8192" 00:48:19.106 ] 00:48:19.106 } 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "method": "bdev_nvme_attach_controller", 00:48:19.106 "params": { 00:48:19.106 "name": "nvme0", 00:48:19.106 "trtype": "TCP", 00:48:19.106 "adrfam": "IPv4", 00:48:19.106 "traddr": "127.0.0.1", 00:48:19.106 "trsvcid": "4420", 00:48:19.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:19.106 "prchk_reftag": false, 00:48:19.106 "prchk_guard": false, 00:48:19.106 "ctrlr_loss_timeout_sec": 0, 00:48:19.106 "reconnect_delay_sec": 0, 00:48:19.106 "fast_io_fail_timeout_sec": 0, 00:48:19.106 "psk": "key0", 00:48:19.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:19.106 "hdgst": false, 00:48:19.106 "ddgst": false, 00:48:19.106 "multipath": "multipath" 00:48:19.106 } 00:48:19.106 }, 00:48:19.106 { 00:48:19.106 "method": "bdev_nvme_set_hotplug", 00:48:19.106 "params": { 00:48:19.107 "period_us": 100000, 00:48:19.107 "enable": false 00:48:19.107 } 00:48:19.107 }, 00:48:19.107 { 00:48:19.107 "method": "bdev_wait_for_examine" 00:48:19.107 } 00:48:19.107 ] 00:48:19.107 }, 00:48:19.107 { 00:48:19.107 
"subsystem": "nbd", 00:48:19.107 "config": [] 00:48:19.107 } 00:48:19.107 ] 00:48:19.107 }' 00:48:19.107 15:52:46 keyring_file -- keyring/file.sh@115 -- # killprocess 43831 00:48:19.107 15:52:46 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 43831 ']' 00:48:19.107 15:52:46 keyring_file -- common/autotest_common.sh@956 -- # kill -0 43831 00:48:19.107 15:52:46 keyring_file -- common/autotest_common.sh@957 -- # uname 00:48:19.107 15:52:46 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:19.107 15:52:46 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 43831 00:48:19.107 15:52:46 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:48:19.107 15:52:46 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:48:19.107 15:52:46 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 43831' 00:48:19.107 killing process with pid 43831 00:48:19.107 15:52:46 keyring_file -- common/autotest_common.sh@971 -- # kill 43831 00:48:19.107 Received shutdown signal, test time was about 1.000000 seconds 00:48:19.107 00:48:19.107 Latency(us) 00:48:19.107 [2024-11-06T14:52:46.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:19.107 [2024-11-06T14:52:46.745Z] =================================================================================================================== 00:48:19.107 [2024-11-06T14:52:46.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:19.107 15:52:46 keyring_file -- common/autotest_common.sh@976 -- # wait 43831 00:48:20.043 15:52:47 keyring_file -- keyring/file.sh@118 -- # bperfpid=45409 00:48:20.043 15:52:47 keyring_file -- keyring/file.sh@120 -- # waitforlisten 45409 /var/tmp/bperf.sock 00:48:20.043 15:52:47 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 45409 ']' 00:48:20.043 15:52:47 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 
00:48:20.043 15:52:47 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:20.043 15:52:47 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:48:20.043 15:52:47 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:20.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:20.043 15:52:47 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:20.043 15:52:47 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:48:20.043 "subsystems": [ 00:48:20.043 { 00:48:20.043 "subsystem": "keyring", 00:48:20.043 "config": [ 00:48:20.043 { 00:48:20.043 "method": "keyring_file_add_key", 00:48:20.043 "params": { 00:48:20.043 "name": "key0", 00:48:20.043 "path": "/tmp/tmp.h6S00TEJa7" 00:48:20.043 } 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "method": "keyring_file_add_key", 00:48:20.043 "params": { 00:48:20.043 "name": "key1", 00:48:20.043 "path": "/tmp/tmp.H7jjuBXZrK" 00:48:20.043 } 00:48:20.043 } 00:48:20.043 ] 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "subsystem": "iobuf", 00:48:20.043 "config": [ 00:48:20.043 { 00:48:20.043 "method": "iobuf_set_options", 00:48:20.043 "params": { 00:48:20.043 "small_pool_count": 8192, 00:48:20.043 "large_pool_count": 1024, 00:48:20.043 "small_bufsize": 8192, 00:48:20.043 "large_bufsize": 135168, 00:48:20.043 "enable_numa": false 00:48:20.043 } 00:48:20.043 } 00:48:20.043 ] 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "subsystem": "sock", 00:48:20.043 "config": [ 00:48:20.043 { 00:48:20.043 "method": "sock_set_default_impl", 00:48:20.043 "params": { 00:48:20.043 "impl_name": "posix" 00:48:20.043 } 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "method": "sock_impl_set_options", 00:48:20.043 "params": { 00:48:20.043 
"impl_name": "ssl", 00:48:20.043 "recv_buf_size": 4096, 00:48:20.043 "send_buf_size": 4096, 00:48:20.043 "enable_recv_pipe": true, 00:48:20.043 "enable_quickack": false, 00:48:20.043 "enable_placement_id": 0, 00:48:20.043 "enable_zerocopy_send_server": true, 00:48:20.043 "enable_zerocopy_send_client": false, 00:48:20.043 "zerocopy_threshold": 0, 00:48:20.043 "tls_version": 0, 00:48:20.043 "enable_ktls": false 00:48:20.043 } 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "method": "sock_impl_set_options", 00:48:20.043 "params": { 00:48:20.043 "impl_name": "posix", 00:48:20.043 "recv_buf_size": 2097152, 00:48:20.043 "send_buf_size": 2097152, 00:48:20.043 "enable_recv_pipe": true, 00:48:20.043 "enable_quickack": false, 00:48:20.043 "enable_placement_id": 0, 00:48:20.043 "enable_zerocopy_send_server": true, 00:48:20.043 "enable_zerocopy_send_client": false, 00:48:20.043 "zerocopy_threshold": 0, 00:48:20.043 "tls_version": 0, 00:48:20.043 "enable_ktls": false 00:48:20.043 } 00:48:20.043 } 00:48:20.043 ] 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "subsystem": "vmd", 00:48:20.043 "config": [] 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "subsystem": "accel", 00:48:20.043 "config": [ 00:48:20.043 { 00:48:20.043 "method": "accel_set_options", 00:48:20.043 "params": { 00:48:20.043 "small_cache_size": 128, 00:48:20.043 "large_cache_size": 16, 00:48:20.043 "task_count": 2048, 00:48:20.043 "sequence_count": 2048, 00:48:20.043 "buf_count": 2048 00:48:20.043 } 00:48:20.043 } 00:48:20.043 ] 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "subsystem": "bdev", 00:48:20.043 "config": [ 00:48:20.043 { 00:48:20.043 "method": "bdev_set_options", 00:48:20.043 "params": { 00:48:20.043 "bdev_io_pool_size": 65535, 00:48:20.043 "bdev_io_cache_size": 256, 00:48:20.043 "bdev_auto_examine": true, 00:48:20.043 "iobuf_small_cache_size": 128, 00:48:20.043 "iobuf_large_cache_size": 16 00:48:20.043 } 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "method": "bdev_raid_set_options", 00:48:20.043 "params": { 
00:48:20.043 "process_window_size_kb": 1024, 00:48:20.043 "process_max_bandwidth_mb_sec": 0 00:48:20.043 } 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "method": "bdev_iscsi_set_options", 00:48:20.043 "params": { 00:48:20.043 "timeout_sec": 30 00:48:20.043 } 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "method": "bdev_nvme_set_options", 00:48:20.043 "params": { 00:48:20.043 "action_on_timeout": "none", 00:48:20.043 "timeout_us": 0, 00:48:20.043 "timeout_admin_us": 0, 00:48:20.043 "keep_alive_timeout_ms": 10000, 00:48:20.043 "arbitration_burst": 0, 00:48:20.043 "low_priority_weight": 0, 00:48:20.043 "medium_priority_weight": 0, 00:48:20.043 "high_priority_weight": 0, 00:48:20.043 "nvme_adminq_poll_period_us": 10000, 00:48:20.043 "nvme_ioq_poll_period_us": 0, 00:48:20.043 "io_queue_requests": 512, 00:48:20.043 "delay_cmd_submit": true, 00:48:20.043 "transport_retry_count": 4, 00:48:20.043 "bdev_retry_count": 3, 00:48:20.043 "transport_ack_timeout": 0, 00:48:20.043 "ctrlr_loss_timeout_sec": 0, 00:48:20.043 "reconnect_delay_sec": 0, 00:48:20.043 "fast_io_fail_timeout_sec": 0, 00:48:20.043 "disable_auto_failback": false, 00:48:20.043 "generate_uuids": false, 00:48:20.043 "transport_tos": 0, 00:48:20.043 "nvme_error_stat": false, 00:48:20.043 "rdma_srq_size": 0, 00:48:20.043 "io_path_stat": false, 00:48:20.043 "allow_accel_sequence": false, 00:48:20.043 "rdma_max_cq_size": 0, 00:48:20.043 "rdma_cm_event_timeout_ms": 0, 00:48:20.043 "dhchap_digests": [ 00:48:20.043 "sha256", 00:48:20.043 "sha384", 00:48:20.043 "sha512" 00:48:20.043 ], 00:48:20.043 "dhchap_dhgroups": [ 00:48:20.043 "null", 00:48:20.043 "ffdhe2048", 00:48:20.043 "ffdhe3072", 00:48:20.043 "ffdhe4096", 00:48:20.043 "ffdhe6144", 00:48:20.043 "ffdhe8192" 00:48:20.043 ] 00:48:20.043 } 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "method": "bdev_nvme_attach_controller", 00:48:20.043 "params": { 00:48:20.043 "name": "nvme0", 00:48:20.043 "trtype": "TCP", 00:48:20.043 "adrfam": "IPv4", 00:48:20.043 "traddr": 
"127.0.0.1", 00:48:20.043 "trsvcid": "4420", 00:48:20.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:20.043 "prchk_reftag": false, 00:48:20.043 "prchk_guard": false, 00:48:20.043 "ctrlr_loss_timeout_sec": 0, 00:48:20.043 "reconnect_delay_sec": 0, 00:48:20.043 "fast_io_fail_timeout_sec": 0, 00:48:20.043 "psk": "key0", 00:48:20.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:20.043 "hdgst": false, 00:48:20.043 "ddgst": false, 00:48:20.043 "multipath": "multipath" 00:48:20.043 } 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "method": "bdev_nvme_set_hotplug", 00:48:20.043 "params": { 00:48:20.043 "period_us": 100000, 00:48:20.043 "enable": false 00:48:20.043 } 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "method": "bdev_wait_for_examine" 00:48:20.043 } 00:48:20.043 ] 00:48:20.043 }, 00:48:20.043 { 00:48:20.043 "subsystem": "nbd", 00:48:20.043 "config": [] 00:48:20.043 } 00:48:20.043 ] 00:48:20.043 }' 00:48:20.043 15:52:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:20.043 [2024-11-06 15:52:47.626912] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:48:20.043 [2024-11-06 15:52:47.626997] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45409 ] 00:48:20.302 [2024-11-06 15:52:47.752018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:20.302 [2024-11-06 15:52:47.862906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:20.869 [2024-11-06 15:52:48.251152] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:20.869 15:52:48 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:20.869 15:52:48 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:48:20.869 15:52:48 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:48:20.869 15:52:48 keyring_file -- keyring/file.sh@121 -- # jq length 00:48:20.869 15:52:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:21.127 15:52:48 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:48:21.127 15:52:48 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:48:21.127 15:52:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:21.127 15:52:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:21.127 15:52:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:21.127 15:52:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:21.127 15:52:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:21.385 15:52:48 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:48:21.385 15:52:48 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:48:21.385 15:52:48 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:21.385 15:52:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:21.385 15:52:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:21.385 15:52:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:21.385 15:52:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:21.644 15:52:49 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:48:21.644 15:52:49 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:48:21.644 15:52:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:48:21.644 15:52:49 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:48:21.644 15:52:49 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:48:21.644 15:52:49 keyring_file -- keyring/file.sh@1 -- # cleanup 00:48:21.644 15:52:49 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.h6S00TEJa7 /tmp/tmp.H7jjuBXZrK 00:48:21.644 15:52:49 keyring_file -- keyring/file.sh@20 -- # killprocess 45409 00:48:21.644 15:52:49 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 45409 ']' 00:48:21.644 15:52:49 keyring_file -- common/autotest_common.sh@956 -- # kill -0 45409 00:48:21.644 15:52:49 keyring_file -- common/autotest_common.sh@957 -- # uname 00:48:21.644 15:52:49 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:21.644 15:52:49 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 45409 00:48:21.644 15:52:49 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:48:21.644 15:52:49 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:48:21.644 15:52:49 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 45409' 00:48:21.644 killing process with pid 45409 00:48:21.644 15:52:49 keyring_file -- common/autotest_common.sh@971 -- # kill 45409 00:48:21.644 Received shutdown signal, test time was about 1.000000 seconds 00:48:21.644 00:48:21.644 Latency(us) 00:48:21.644 [2024-11-06T14:52:49.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:21.644 [2024-11-06T14:52:49.282Z] =================================================================================================================== 00:48:21.644 [2024-11-06T14:52:49.282Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:48:21.644 15:52:49 keyring_file -- common/autotest_common.sh@976 -- # wait 45409 00:48:22.581 15:52:50 keyring_file -- keyring/file.sh@21 -- # killprocess 43647 00:48:22.581 15:52:50 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 43647 ']' 00:48:22.581 15:52:50 keyring_file -- common/autotest_common.sh@956 -- # kill -0 43647 00:48:22.581 15:52:50 keyring_file -- common/autotest_common.sh@957 -- # uname 00:48:22.581 15:52:50 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:22.581 15:52:50 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 43647 00:48:22.581 15:52:50 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:48:22.581 15:52:50 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:48:22.581 15:52:50 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 43647' 00:48:22.581 killing process with pid 43647 00:48:22.581 15:52:50 keyring_file -- common/autotest_common.sh@971 -- # kill 43647 00:48:22.581 15:52:50 keyring_file -- common/autotest_common.sh@976 -- # wait 43647 00:48:25.110 00:48:25.110 real 0m16.442s 00:48:25.110 user 0m35.842s 00:48:25.110 sys 0m2.984s 00:48:25.110 15:52:52 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:48:25.110 15:52:52 keyring_file -- 
common/autotest_common.sh@10 -- # set +x 00:48:25.110 ************************************ 00:48:25.110 END TEST keyring_file 00:48:25.110 ************************************ 00:48:25.110 15:52:52 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:48:25.110 15:52:52 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:48:25.110 15:52:52 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:48:25.110 15:52:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:48:25.110 15:52:52 -- common/autotest_common.sh@10 -- # set +x 00:48:25.110 ************************************ 00:48:25.110 START TEST keyring_linux 00:48:25.110 ************************************ 00:48:25.110 15:52:52 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:48:25.110 Joined session keyring: 611524655 00:48:25.110 * Looking for test storage... 
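The `START TEST` / `END TEST` banners and the `run_test keyring_linux …` invocation above come from SPDK's autotest harness. The wrapper's body is never echoed into this log, so the following is only a sketch of the pattern it implements (banner, run, banner, propagate exit status); the real helper in `autotest_common.sh` also records per-test timing.

```shell
# Sketch of a run_test-style wrapper (assumed shape, not SPDK's exact code).
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```

Invoked as `run_test keyring_linux "$rootdir/test/keyring/linux.sh"`, it brackets the script's output with banners like the ones visible above.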
00:48:25.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:48:25.110 15:52:52 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:48:25.110 15:52:52 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:48:25.110 15:52:52 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:48:25.110 15:52:52 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@345 -- # : 1 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:25.110 15:52:52 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:48:25.369 15:52:52 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:48:25.369 15:52:52 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:25.369 15:52:52 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:25.369 15:52:52 keyring_linux -- scripts/common.sh@368 -- # return 0 00:48:25.369 15:52:52 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:25.369 15:52:52 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:48:25.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:25.369 --rc genhtml_branch_coverage=1 00:48:25.369 --rc genhtml_function_coverage=1 00:48:25.369 --rc genhtml_legend=1 00:48:25.369 --rc geninfo_all_blocks=1 00:48:25.369 --rc geninfo_unexecuted_blocks=1 00:48:25.369 00:48:25.369 ' 00:48:25.369 15:52:52 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:48:25.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:25.369 --rc genhtml_branch_coverage=1 00:48:25.369 --rc genhtml_function_coverage=1 00:48:25.369 --rc genhtml_legend=1 00:48:25.369 --rc geninfo_all_blocks=1 00:48:25.369 --rc geninfo_unexecuted_blocks=1 00:48:25.369 00:48:25.369 ' 
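The cmp_versions walk traced above (split each version on `.-:`, pad the shorter array, compare field by field) is what `lt 1.15 2` expands to. For scripts that can rely on GNU coreutils, the same numeric "less than" can be had with `sort -V`; this is an alternative sketch, not SPDK's implementation:

```shell
# Version "less than": v1 < v2 iff the strings differ and v1 sorts first
# under version sort (numeric per field, so 1.9 < 1.15).
lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

The field-wise comparison matters here: lexicographically `"1.15" < "2"` happens to hold, but `lt 1.9 1.15` must also be true, which a plain string compare would get wrong.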
00:48:25.369 15:52:52 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:48:25.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:25.369 --rc genhtml_branch_coverage=1 00:48:25.369 --rc genhtml_function_coverage=1 00:48:25.369 --rc genhtml_legend=1 00:48:25.369 --rc geninfo_all_blocks=1 00:48:25.369 --rc geninfo_unexecuted_blocks=1 00:48:25.369 00:48:25.369 ' 00:48:25.369 15:52:52 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:48:25.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:25.369 --rc genhtml_branch_coverage=1 00:48:25.369 --rc genhtml_function_coverage=1 00:48:25.369 --rc genhtml_legend=1 00:48:25.369 --rc geninfo_all_blocks=1 00:48:25.369 --rc geninfo_unexecuted_blocks=1 00:48:25.369 00:48:25.369 ' 00:48:25.369 15:52:52 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:48:25.369 15:52:52 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:25.369 15:52:52 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:25.369 15:52:52 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:48:25.369 15:52:52 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:25.369 15:52:52 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:25.369 15:52:52 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:25.369 15:52:52 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:25.369 15:52:52 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:25.369 15:52:52 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:25.369 15:52:52 keyring_linux -- paths/export.sh@5 -- # export PATH 00:48:25.370 15:52:52 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:48:25.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:48:25.370 15:52:52 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:48:25.370 15:52:52 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:48:25.370 15:52:52 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:48:25.370 15:52:52 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:48:25.370 15:52:52 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:48:25.370 15:52:52 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@733 -- # python - 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:48:25.370 /tmp/:spdk-test:key0 00:48:25.370 15:52:52 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:48:25.370 15:52:52 keyring_linux -- nvmf/common.sh@733 -- # python - 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:48:25.370 15:52:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:48:25.370 /tmp/:spdk-test:key1 00:48:25.370 15:52:52 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=46422 00:48:25.370 15:52:52 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:48:25.370 15:52:52 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 46422 00:48:25.370 15:52:52 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 46422 ']' 00:48:25.370 15:52:52 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:25.370 15:52:52 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:25.370 15:52:52 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:25.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:25.370 15:52:52 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:25.370 15:52:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:25.370 [2024-11-06 15:52:52.951786] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
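The prep_key steps above produce each key file through format_interchange_psk, whose python heredoc body (invoked at nvmf/common.sh@733) is not echoed into the log. A sketch consistent with the NVMe/TCP TLS PSK interchange framing — base64 of the configured key bytes followed by a CRC-32, wrapped as `NVMeTLSkey-1:<digest>:<b64>:` — might look like this; the CRC variant and byte order are assumptions, not confirmed by the log:

```shell
format_interchange_psk() {
    local key=$1 digest=$2
    # Append a little-endian CRC-32 to the key bytes, base64 the result, and
    # frame it. Hypothetical reconstruction of the unseen heredoc at
    # nvmf/common.sh@733; the CRC choice (zlib, little-endian) is assumed.
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
}
```

Piping the result to `/tmp/:spdk-test:key0` and running `chmod 0600` on it mirrors prep_key above; the token recorded later in the log (`NVMeTLSkey-1:00:MDAx…`) has exactly this shape, a 48-character base64 payload for a 32-byte key plus 4 CRC bytes.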
00:48:25.370 [2024-11-06 15:52:52.951878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46422 ] 00:48:25.635 [2024-11-06 15:52:53.076204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:25.635 [2024-11-06 15:52:53.174292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:26.572 15:52:53 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:26.572 15:52:53 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:48:26.572 15:52:53 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:48:26.572 15:52:53 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:26.572 15:52:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:26.572 [2024-11-06 15:52:53.994170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:26.572 null0 00:48:26.572 [2024-11-06 15:52:54.026225] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:48:26.572 [2024-11-06 15:52:54.026649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:48:26.572 15:52:54 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:26.572 15:52:54 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:48:26.572 205292285 00:48:26.572 15:52:54 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:48:26.572 40644503 00:48:26.572 15:52:54 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=46651 00:48:26.572 15:52:54 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 46651 /var/tmp/bperf.sock 00:48:26.572 15:52:54 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:48:26.572 15:52:54 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 46651 ']' 00:48:26.572 15:52:54 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:26.572 15:52:54 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:26.572 15:52:54 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:26.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:26.572 15:52:54 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:26.572 15:52:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:26.572 [2024-11-06 15:52:54.123832] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:48:26.572 [2024-11-06 15:52:54.123920] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46651 ] 00:48:26.830 [2024-11-06 15:52:54.248420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:26.830 [2024-11-06 15:52:54.352678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:27.400 15:52:54 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:27.400 15:52:54 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:48:27.400 15:52:54 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:48:27.400 15:52:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:48:27.703 15:52:55 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:48:27.703 15:52:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:48:28.021 15:52:55 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:48:28.021 15:52:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:48:28.289 [2024-11-06 15:52:55.792056] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:28.289 nvme0n1 00:48:28.289 15:52:55 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:48:28.289 15:52:55 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:48:28.289 15:52:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:48:28.289 15:52:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:48:28.289 15:52:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:48:28.289 15:52:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:28.548 15:52:56 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:48:28.548 15:52:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:48:28.548 15:52:56 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:48:28.548 15:52:56 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:48:28.548 15:52:56 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:28.548 15:52:56 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:48:28.548 15:52:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:28.807 15:52:56 keyring_linux -- keyring/linux.sh@25 -- # sn=205292285 00:48:28.807 15:52:56 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:48:28.807 15:52:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:48:28.807 15:52:56 keyring_linux -- keyring/linux.sh@26 -- # [[ 205292285 == \2\0\5\2\9\2\2\8\5 ]] 00:48:28.807 15:52:56 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 205292285 00:48:28.807 15:52:56 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:48:28.807 15:52:56 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:48:28.807 Running I/O for 1 seconds... 00:48:30.185 16815.00 IOPS, 65.68 MiB/s 00:48:30.185 Latency(us) 00:48:30.185 [2024-11-06T14:52:57.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:30.185 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:48:30.185 nvme0n1 : 1.01 16813.82 65.68 0.00 0.00 7581.13 5242.88 12170.97 00:48:30.185 [2024-11-06T14:52:57.823Z] =================================================================================================================== 00:48:30.185 [2024-11-06T14:52:57.823Z] Total : 16813.82 65.68 0.00 0.00 7581.13 5242.88 12170.97 00:48:30.185 { 00:48:30.185 "results": [ 00:48:30.185 { 00:48:30.185 "job": "nvme0n1", 00:48:30.185 "core_mask": "0x2", 00:48:30.185 "workload": "randread", 00:48:30.186 "status": "finished", 00:48:30.186 "queue_depth": 128, 00:48:30.186 "io_size": 4096, 00:48:30.186 "runtime": 1.007683, 00:48:30.186 "iops": 16813.819425355, 00:48:30.186 "mibps": 65.67898213029297, 00:48:30.186 "io_failed": 0, 00:48:30.186 "io_timeout": 0, 00:48:30.186 "avg_latency_us": 7581.13153267398, 00:48:30.186 "min_latency_us": 5242.88, 00:48:30.186 "max_latency_us": 12170.971428571429 00:48:30.186 } 00:48:30.186 ], 00:48:30.186 "core_count": 1 00:48:30.186 } 00:48:30.186 15:52:57 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:30.186 15:52:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:48:30.186 15:52:57 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:48:30.186 15:52:57 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:48:30.186 15:52:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:48:30.186 15:52:57 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:48:30.186 15:52:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:48:30.186 15:52:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:30.186 15:52:57 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:48:30.186 15:52:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:48:30.186 15:52:57 keyring_linux -- keyring/linux.sh@23 -- # return 00:48:30.186 15:52:57 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:30.186 15:52:57 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:48:30.186 15:52:57 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:30.186 15:52:57 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:48:30.186 15:52:57 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:30.186 15:52:57 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:48:30.186 15:52:57 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:30.186 15:52:57 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:30.445 15:52:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:30.445 [2024-11-06 15:52:57.999815] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:48:30.445 [2024-11-06 15:52:58.000103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000332f00 (107): Transport endpoint is not connected 00:48:30.445 [2024-11-06 15:52:58.001089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000332f00 (9): Bad file descriptor 00:48:30.445 [2024-11-06 15:52:58.002086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:48:30.445 [2024-11-06 15:52:58.002107] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:48:30.445 [2024-11-06 15:52:58.002119] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:48:30.445 [2024-11-06 15:52:58.002135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:48:30.445 request: 00:48:30.445 { 00:48:30.445 "name": "nvme0", 00:48:30.445 "trtype": "tcp", 00:48:30.445 "traddr": "127.0.0.1", 00:48:30.445 "adrfam": "ipv4", 00:48:30.445 "trsvcid": "4420", 00:48:30.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:30.445 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:30.445 "prchk_reftag": false, 00:48:30.445 "prchk_guard": false, 00:48:30.445 "hdgst": false, 00:48:30.445 "ddgst": false, 00:48:30.445 "psk": ":spdk-test:key1", 00:48:30.445 "allow_unrecognized_csi": false, 00:48:30.445 "method": "bdev_nvme_attach_controller", 00:48:30.445 "req_id": 1 00:48:30.445 } 00:48:30.445 Got JSON-RPC error response 00:48:30.445 response: 00:48:30.445 { 00:48:30.445 "code": -5, 00:48:30.445 "message": "Input/output error" 00:48:30.445 } 00:48:30.445 15:52:58 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:48:30.445 15:52:58 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:30.445 15:52:58 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:30.445 15:52:58 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@33 -- # sn=205292285 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 205292285 00:48:30.445 1 links removed 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:48:30.445 
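The failed attach above is intentional: key1 was loaded into the session keyring but never registered with bperf, so linux.sh@84 wraps the command in NOT and the `es=1` bookkeeping that follows confirms the expected failure. The wrapper's body is not in the log; its essential pattern is exit-status inversion, sketched here (the real helper also validates its argument via `type -t`, as the valid_exec_arg trace above shows):

```shell
# Succeed only if the wrapped command fails -- negative-test helper sketch.
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}
```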
15:52:58 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@33 -- # sn=40644503 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 40644503 00:48:30.445 1 links removed 00:48:30.445 15:52:58 keyring_linux -- keyring/linux.sh@41 -- # killprocess 46651 00:48:30.445 15:52:58 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 46651 ']' 00:48:30.445 15:52:58 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 46651 00:48:30.445 15:52:58 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:48:30.445 15:52:58 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:30.445 15:52:58 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 46651 00:48:30.702 15:52:58 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:48:30.702 15:52:58 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:48:30.702 15:52:58 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 46651' 00:48:30.702 killing process with pid 46651 00:48:30.702 15:52:58 keyring_linux -- common/autotest_common.sh@971 -- # kill 46651 00:48:30.702 Received shutdown signal, test time was about 1.000000 seconds 00:48:30.702 00:48:30.702 Latency(us) 00:48:30.702 [2024-11-06T14:52:58.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:30.702 [2024-11-06T14:52:58.340Z] =================================================================================================================== 00:48:30.702 [2024-11-06T14:52:58.340Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:30.702 15:52:58 keyring_linux -- common/autotest_common.sh@976 -- # wait 46651 00:48:31.636 
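killprocess (used at linux.sh@41 and @42 above, and earlier for the keyring_file pids) guards against an empty pid, probes liveness with `kill -0`, then terminates and reaps. A minimal sketch of that shape; the real helper in `autotest_common.sh` also special-cases sudo-owned processes, which is what the `'[' reactor_1 = sudo ']'` checks above are doing:

```shell
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                # refuse an empty pid
    kill -0 "$pid" 2> /dev/null || return 1  # still alive?
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null                 # reap if it is our child
    return 0
}
```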
15:52:58 keyring_linux -- keyring/linux.sh@42 -- # killprocess 46422 00:48:31.636 15:52:58 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 46422 ']' 00:48:31.636 15:52:58 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 46422 00:48:31.636 15:52:58 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:48:31.636 15:52:58 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:31.636 15:52:58 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 46422 00:48:31.636 15:52:59 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:48:31.636 15:52:59 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:48:31.636 15:52:59 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 46422' 00:48:31.636 killing process with pid 46422 00:48:31.636 15:52:59 keyring_linux -- common/autotest_common.sh@971 -- # kill 46422 00:48:31.636 15:52:59 keyring_linux -- common/autotest_common.sh@976 -- # wait 46422 00:48:34.176 00:48:34.176 real 0m8.763s 00:48:34.176 user 0m14.377s 00:48:34.176 sys 0m1.677s 00:48:34.176 15:53:01 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:48:34.176 15:53:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:34.176 ************************************ 00:48:34.176 END TEST keyring_linux 00:48:34.176 ************************************ 00:48:34.176 15:53:01 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:48:34.176 15:53:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:48:34.176 15:53:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:48:34.176 15:53:01 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:48:34.176 15:53:01 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:48:34.176 15:53:01 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:48:34.176 15:53:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:48:34.176 15:53:01 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:48:34.176 
15:53:01 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:48:34.176 15:53:01 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:48:34.176 15:53:01 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:48:34.176 15:53:01 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:48:34.176 15:53:01 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:48:34.176 15:53:01 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:48:34.176 15:53:01 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:48:34.176 15:53:01 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:48:34.176 15:53:01 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:48:34.176 15:53:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:48:34.176 15:53:01 -- common/autotest_common.sh@10 -- # set +x 00:48:34.176 15:53:01 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:48:34.176 15:53:01 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:48:34.176 15:53:01 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:48:34.176 15:53:01 -- common/autotest_common.sh@10 -- # set +x 00:48:39.448 INFO: APP EXITING 00:48:39.448 INFO: killing all VMs 00:48:39.448 INFO: killing vhost app 00:48:39.448 INFO: EXIT DONE 00:48:41.354 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:48:41.354 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:48:41.354 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:48:41.354 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:48:41.354 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:48:41.354 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:48:41.612 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:48:41.612 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:48:41.612 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:48:41.612 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:48:41.612 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:48:41.612 0000:80:04.5 (8086 2021): 
Already using the ioatdma driver 00:48:41.612 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:48:41.612 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:48:41.612 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:48:41.612 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:48:41.612 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:48:44.901 Cleaning 00:48:44.901 Removing: /var/run/dpdk/spdk0/config 00:48:44.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:48:44.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:48:44.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:48:44.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:48:44.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:48:44.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:48:44.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:48:44.901 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:48:44.901 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:48:44.901 Removing: /var/run/dpdk/spdk0/hugepage_info 00:48:44.901 Removing: /var/run/dpdk/spdk1/config 00:48:44.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:48:44.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:48:44.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:48:44.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:48:44.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:48:44.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:48:44.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:48:44.901 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:48:44.901 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:48:44.901 Removing: /var/run/dpdk/spdk1/hugepage_info 00:48:44.901 Removing: /var/run/dpdk/spdk2/config 00:48:44.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:48:44.901 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:48:44.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:48:44.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:48:44.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:48:44.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:48:44.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:48:44.901 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:48:44.901 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:48:44.901 Removing: /var/run/dpdk/spdk2/hugepage_info 00:48:44.901 Removing: /var/run/dpdk/spdk3/config 00:48:44.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:48:44.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:48:44.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:48:44.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:48:44.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:48:44.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:48:44.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:48:44.901 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:48:44.901 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:48:44.901 Removing: /var/run/dpdk/spdk3/hugepage_info 00:48:44.901 Removing: /var/run/dpdk/spdk4/config 00:48:44.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:48:44.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:48:44.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:48:44.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:48:44.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:48:44.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:48:44.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:48:44.901 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:48:44.901 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:48:44.901 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:48:44.901 Removing: /dev/shm/bdev_svc_trace.1 00:48:44.901 Removing: /dev/shm/nvmf_trace.0 00:48:44.901 Removing: /dev/shm/spdk_tgt_trace.pid3637933 00:48:44.901 Removing: /var/run/dpdk/spdk0 00:48:44.901 Removing: /var/run/dpdk/spdk1 00:48:44.901 Removing: /var/run/dpdk/spdk2 00:48:44.901 Removing: /var/run/dpdk/spdk3 00:48:44.901 Removing: /var/run/dpdk/spdk4 00:48:44.901 Removing: /var/run/dpdk/spdk_pid11123 00:48:44.901 Removing: /var/run/dpdk/spdk_pid11198 00:48:44.901 Removing: /var/run/dpdk/spdk_pid16460 00:48:44.901 Removing: /var/run/dpdk/spdk_pid18614 00:48:44.901 Removing: /var/run/dpdk/spdk_pid20616 00:48:44.901 Removing: /var/run/dpdk/spdk_pid22063 00:48:44.901 Removing: /var/run/dpdk/spdk_pid24583 00:48:44.901 Removing: /var/run/dpdk/spdk_pid25978 00:48:44.901 Removing: /var/run/dpdk/spdk_pid2612 00:48:44.901 Removing: /var/run/dpdk/spdk_pid35051 00:48:44.901 Removing: /var/run/dpdk/spdk_pid35514 00:48:44.901 Removing: /var/run/dpdk/spdk_pid36114 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3633824 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3635351 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3637933 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3639033 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3640613 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3641309 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3642706 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3642778 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3643564 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3645440 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3647041 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3647802 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3648555 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3649321 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3650075 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3650332 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3650589 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3650878 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3652071 00:48:44.901 
Removing: /var/run/dpdk/spdk_pid3655441 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3656034 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3656750 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3656980 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3658630 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3658862 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3660632 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3660749 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3661455 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3661640 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3662194 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3662426 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3663915 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3664166 00:48:44.901 Removing: /var/run/dpdk/spdk_pid3664479 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3668869 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3673368 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3684406 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3685068 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3689614 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3690043 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3694817 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3700954 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3704006 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3715149 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3725102 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3727102 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3728256 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3746268 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3750687 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3836350 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3841942 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3848504 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3858641 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3888207 00:48:44.902 Removing: /var/run/dpdk/spdk_pid38887 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3892958 00:48:44.902 Removing: /var/run/dpdk/spdk_pid3894560 
00:48:44.902 Removing: /var/run/dpdk/spdk_pid3896620 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3896934 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3897328 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3897665 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3898697 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3900614 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3902296 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3903042 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3905587 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3906537 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3907505 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3912002 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3917847 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3917848 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3917849 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3922119 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3926596 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3931606 00:48:45.161 Removing: /var/run/dpdk/spdk_pid39358 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3968682 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3972729 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3979067 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3981157 00:48:45.161 Removing: /var/run/dpdk/spdk_pid39823 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3983177 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3985226 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3990334 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3995365 00:48:45.161 Removing: /var/run/dpdk/spdk_pid3999843 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4008417 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4008424 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4013378 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4013608 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4013840 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4014300 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4014310 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4016159 00:48:45.161 Removing: 
/var/run/dpdk/spdk_pid4017850 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4019575 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4021177 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4022771 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4024375 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4030677 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4031246 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4033326 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4034485 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4041683 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4044839 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4050566 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4056264 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4065284 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4072817 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4072900 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4092366 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4093189 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4093890 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4094812 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4096017 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4096721 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4097430 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4098132 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4102834 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4103190 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4109511 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4109724 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4115371 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4119836 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4130132 00:48:45.161 Removing: /var/run/dpdk/spdk_pid4130817 00:48:45.421 Removing: /var/run/dpdk/spdk_pid4135088 00:48:45.421 Removing: /var/run/dpdk/spdk_pid4135492 00:48:45.421 Removing: /var/run/dpdk/spdk_pid4140037 00:48:45.421 Removing: /var/run/dpdk/spdk_pid4145908 00:48:45.421 Removing: /var/run/dpdk/spdk_pid4148708 00:48:45.421 Removing: /var/run/dpdk/spdk_pid4159497 
00:48:45.421 Removing: /var/run/dpdk/spdk_pid4168641 00:48:45.421 Removing: /var/run/dpdk/spdk_pid4170400 00:48:45.421 Removing: /var/run/dpdk/spdk_pid4171495 00:48:45.421 Removing: /var/run/dpdk/spdk_pid4188983 00:48:45.421 Removing: /var/run/dpdk/spdk_pid4193199 00:48:45.421 Removing: /var/run/dpdk/spdk_pid43647 00:48:45.421 Removing: /var/run/dpdk/spdk_pid43831 00:48:45.421 Removing: /var/run/dpdk/spdk_pid45409 00:48:45.421 Removing: /var/run/dpdk/spdk_pid46422 00:48:45.421 Removing: /var/run/dpdk/spdk_pid46651 00:48:45.421 Clean 00:48:45.421 15:53:12 -- common/autotest_common.sh@1451 -- # return 0 00:48:45.421 15:53:12 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:48:45.421 15:53:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:48:45.421 15:53:12 -- common/autotest_common.sh@10 -- # set +x 00:48:45.421 15:53:12 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:48:45.421 15:53:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:48:45.421 15:53:12 -- common/autotest_common.sh@10 -- # set +x 00:48:45.421 15:53:13 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:48:45.421 15:53:13 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:48:45.421 15:53:13 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:48:45.421 15:53:13 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:48:45.421 15:53:13 -- spdk/autotest.sh@394 -- # hostname 00:48:45.421 15:53:13 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:48:45.679 geninfo: 
WARNING: invalid characters removed from testname! 00:49:07.622 15:53:32 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:49:07.622 15:53:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:49:09.527 15:53:36 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:49:10.904 15:53:38 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:49:12.808 15:53:40 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:49:14.713 15:53:41 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:49:16.618 15:53:43 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:49:16.618 15:53:43 -- spdk/autorun.sh@1 -- $ timing_finish 00:49:16.618 15:53:43 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:49:16.618 15:53:43 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:49:16.618 15:53:43 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:49:16.618 15:53:43 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:49:16.618 + [[ -n 3557059 ]] 00:49:16.618 + sudo kill 3557059 00:49:16.629 [Pipeline] } 00:49:16.645 [Pipeline] // stage 00:49:16.650 [Pipeline] } 00:49:16.666 [Pipeline] // timeout 00:49:16.671 [Pipeline] } 00:49:16.686 [Pipeline] // catchError 00:49:16.691 [Pipeline] } 00:49:16.707 [Pipeline] // wrap 00:49:16.713 [Pipeline] } 00:49:16.726 [Pipeline] // catchError 00:49:16.736 [Pipeline] stage 00:49:16.739 [Pipeline] { (Epilogue) 00:49:16.753 [Pipeline] catchError 00:49:16.754 
[Pipeline] { 00:49:16.768 [Pipeline] echo 00:49:16.770 Cleanup processes 00:49:16.776 [Pipeline] sh 00:49:17.063 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:49:17.063 58885 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:49:17.077 [Pipeline] sh 00:49:17.364 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:49:17.364 ++ grep -v 'sudo pgrep' 00:49:17.364 ++ awk '{print $1}' 00:49:17.364 + sudo kill -9 00:49:17.364 + true 00:49:17.376 [Pipeline] sh 00:49:17.661 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:49:29.884 [Pipeline] sh 00:49:30.169 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:49:30.169 Artifacts sizes are good 00:49:30.184 [Pipeline] archiveArtifacts 00:49:30.190 Archiving artifacts 00:49:30.369 [Pipeline] sh 00:49:30.710 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:49:30.726 [Pipeline] cleanWs 00:49:30.736 [WS-CLEANUP] Deleting project workspace... 00:49:30.736 [WS-CLEANUP] Deferred wipeout is used... 00:49:30.743 [WS-CLEANUP] done 00:49:30.745 [Pipeline] } 00:49:30.762 [Pipeline] // catchError 00:49:30.774 [Pipeline] sh 00:49:31.057 + logger -p user.info -t JENKINS-CI 00:49:31.065 [Pipeline] } 00:49:31.078 [Pipeline] // stage 00:49:31.083 [Pipeline] } 00:49:31.096 [Pipeline] // node 00:49:31.101 [Pipeline] End of Pipeline 00:49:31.129 Finished: SUCCESS